2026-01-07 00:00:07.513757 | Job console starting
2026-01-07 00:00:07.543769 | Updating git repos
2026-01-07 00:00:07.676446 | Cloning repos into workspace
2026-01-07 00:00:08.099241 | Restoring repo states
2026-01-07 00:00:08.139610 | Merging changes
2026-01-07 00:00:08.139633 | Checking out repos
2026-01-07 00:00:08.732404 | Preparing playbooks
2026-01-07 00:00:09.784803 | Running Ansible setup
2026-01-07 00:00:18.132421 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-07 00:00:20.958367 |
2026-01-07 00:00:20.958553 | PLAY [Base pre]
2026-01-07 00:00:21.002782 |
2026-01-07 00:00:21.019148 | TASK [Setup log path fact]
2026-01-07 00:00:21.059719 | orchestrator | ok
2026-01-07 00:00:21.122714 |
2026-01-07 00:00:21.122914 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 00:00:21.190126 | orchestrator | ok
2026-01-07 00:00:21.261096 |
2026-01-07 00:00:21.261315 | TASK [emit-job-header : Print job information]
2026-01-07 00:00:21.411009 | # Job Information
2026-01-07 00:00:21.411245 | Ansible Version: 2.16.14
2026-01-07 00:00:21.412885 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-07 00:00:21.412957 | Pipeline: periodic-midnight
2026-01-07 00:00:21.412983 | Executor: 521e9411259a
2026-01-07 00:00:21.413006 | Triggered by: https://github.com/osism/testbed
2026-01-07 00:00:21.413028 | Event ID: 461ce70bf2dc497f9380b0f2b29a549d
2026-01-07 00:00:21.428149 |
2026-01-07 00:00:21.428369 | LOOP [emit-job-header : Print node information]
2026-01-07 00:00:21.880431 | orchestrator | ok:
2026-01-07 00:00:21.880768 | orchestrator | # Node Information
2026-01-07 00:00:21.880813 | orchestrator | Inventory Hostname: orchestrator
2026-01-07 00:00:21.880839 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-07 00:00:21.880862 | orchestrator | Username: zuul-testbed04
2026-01-07 00:00:21.880884 | orchestrator | Distro: Debian 12.12
2026-01-07 00:00:21.880909 | orchestrator | Provider: static-testbed
2026-01-07 00:00:21.880930 | orchestrator | Region:
2026-01-07 00:00:21.880951 | orchestrator | Label: testbed-orchestrator
2026-01-07 00:00:21.880972 | orchestrator | Product Name: OpenStack Nova
2026-01-07 00:00:21.880991 | orchestrator | Interface IP: 81.163.193.140
2026-01-07 00:00:21.907276 |
2026-01-07 00:00:21.907433 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:23.561050 | orchestrator -> localhost | changed
2026-01-07 00:00:23.569852 |
2026-01-07 00:00:23.570004 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-07 00:00:27.759541 | orchestrator -> localhost | changed
2026-01-07 00:00:27.806306 |
2026-01-07 00:00:27.806460 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-07 00:00:29.458356 | orchestrator -> localhost | ok
2026-01-07 00:00:29.465839 |
2026-01-07 00:00:29.465974 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-07 00:00:29.537635 | orchestrator | ok
2026-01-07 00:00:29.621256 | orchestrator | included: /var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-07 00:00:29.662809 |
2026-01-07 00:00:29.662983 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-07 00:00:32.383614 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-07 00:00:32.385060 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/f153efee2c894661b1982c7c4bcd0469_id_rsa
2026-01-07 00:00:32.385201 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/f153efee2c894661b1982c7c4bcd0469_id_rsa.pub
2026-01-07 00:00:32.385242 | orchestrator -> localhost | The key fingerprint is:
2026-01-07 00:00:32.385281 | orchestrator -> localhost | SHA256:C/NqA1rbpuwNWllYukeknfgM+gIscNooYnrh8LnL6mA zuul-build-sshkey
2026-01-07 00:00:32.385312 | orchestrator -> localhost | The key's randomart image is:
2026-01-07 00:00:32.385357 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-07 00:00:32.385387 | orchestrator -> localhost | | |
2026-01-07 00:00:32.385415 | orchestrator -> localhost | | |
2026-01-07 00:00:32.385442 | orchestrator -> localhost | | o |
2026-01-07 00:00:32.385467 | orchestrator -> localhost | |. . O . |
2026-01-07 00:00:32.385493 | orchestrator -> localhost | |o= * B S |
2026-01-07 00:00:32.385523 | orchestrator -> localhost | |Ooo.oO + . |
2026-01-07 00:00:32.385549 | orchestrator -> localhost | |BE.==++ o |
2026-01-07 00:00:32.385574 | orchestrator -> localhost | |+.B=.+=. |
2026-01-07 00:00:32.385601 | orchestrator -> localhost | |o+===+o. |
2026-01-07 00:00:32.385627 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-07 00:00:32.385709 | orchestrator -> localhost | ok: Runtime: 0:00:01.023177
2026-01-07 00:00:32.401788 |
2026-01-07 00:00:32.401948 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-07 00:00:32.479321 | orchestrator | ok
2026-01-07 00:00:32.515724 | orchestrator | included: /var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-07 00:00:32.539243 |
2026-01-07 00:00:32.539389 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-07 00:00:32.583615 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:32.592715 |
2026-01-07 00:00:32.592851 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-07 00:00:33.792831 | orchestrator | changed
2026-01-07 00:00:33.801075 |
2026-01-07 00:00:33.801256 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-07 00:00:34.145543 | orchestrator | ok
2026-01-07 00:00:34.165874 |
2026-01-07 00:00:34.166019 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-07 00:00:34.708527 | orchestrator | ok
2026-01-07 00:00:34.725459 |
2026-01-07 00:00:34.725608 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-07 00:00:35.362089 | orchestrator | ok
2026-01-07 00:00:35.384610 |
2026-01-07 00:00:35.384757 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-07 00:00:35.451864 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:35.459447 |
2026-01-07 00:00:35.459582 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-07 00:00:37.307130 | orchestrator -> localhost | changed
2026-01-07 00:00:37.389747 |
2026-01-07 00:00:37.389923 | TASK [add-build-sshkey : Add back temp key]
2026-01-07 00:00:38.446082 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/f153efee2c894661b1982c7c4bcd0469_id_rsa (zuul-build-sshkey)
2026-01-07 00:00:38.446361 | orchestrator -> localhost | ok: Runtime: 0:00:00.056502
2026-01-07 00:00:38.455091 |
2026-01-07 00:00:38.455436 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-07 00:00:39.213839 | orchestrator | ok
2026-01-07 00:00:39.220436 |
2026-01-07 00:00:39.220578 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-07 00:00:39.293758 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:39.464627 |
2026-01-07 00:00:39.464776 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-07 00:00:40.143326 | orchestrator | ok
2026-01-07 00:00:40.178616 |
2026-01-07 00:00:40.178775 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-07 00:00:40.301104 | orchestrator | ok
2026-01-07 00:00:40.325118 |
2026-01-07 00:00:40.325290 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:41.493336 | orchestrator -> localhost | ok
2026-01-07 00:00:41.501340 |
2026-01-07 00:00:41.501472 | TASK [validate-host : Collect information about the host]
2026-01-07 00:00:43.844631 | orchestrator | ok
2026-01-07 00:00:43.872726 |
2026-01-07 00:00:43.872886 | TASK [validate-host : Sanitize hostname]
2026-01-07 00:00:44.015379 | orchestrator | ok
2026-01-07 00:00:44.023876 |
2026-01-07 00:00:44.024013 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-07 00:00:45.913186 | orchestrator -> localhost | changed
2026-01-07 00:00:45.921001 |
2026-01-07 00:00:45.921137 | TASK [validate-host : Collect information about zuul worker]
2026-01-07 00:00:46.904893 | orchestrator | ok
2026-01-07 00:00:46.926617 |
2026-01-07 00:00:46.928297 | TASK [validate-host : Write out all zuul information for each host]
2026-01-07 00:00:49.044449 | orchestrator -> localhost | changed
2026-01-07 00:00:49.066375 |
2026-01-07 00:00:49.066526 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-07 00:00:49.553814 | orchestrator | ok
2026-01-07 00:00:49.564423 |
2026-01-07 00:00:49.564563 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-07 00:02:13.089958 | orchestrator | changed:
2026-01-07 00:02:13.091551 | orchestrator | .d..t...... src/
2026-01-07 00:02:13.091754 | orchestrator | .d..t...... src/github.com/
2026-01-07 00:02:13.091844 | orchestrator | .d..t...... src/github.com/osism/
2026-01-07 00:02:13.091915 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-07 00:02:13.091981 | orchestrator | RedHat.yml
2026-01-07 00:02:13.213824 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-07 00:02:13.213855 | orchestrator | RedHat.yml
2026-01-07 00:02:13.213921 | orchestrator | = 1.53.0"...
2026-01-07 00:02:33.418347 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-07 00:02:33.437618 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-07 00:02:33.982088 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-07 00:02:36.060846 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-07 00:02:36.126265 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-07 00:02:36.714198 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:36.777301 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-07 00:02:37.275573 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:37.275636 | orchestrator |
2026-01-07 00:02:37.275643 | orchestrator | Providers are signed by their developers.
2026-01-07 00:02:37.275648 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-07 00:02:37.275654 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-07 00:02:37.275669 | orchestrator |
2026-01-07 00:02:37.275674 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-07 00:02:37.275678 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-07 00:02:37.275692 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-07 00:02:37.275696 | orchestrator | you run "tofu init" in the future.
2026-01-07 00:02:37.275951 | orchestrator |
2026-01-07 00:02:37.275960 | orchestrator | OpenTofu has been successfully initialized!
2026-01-07 00:02:37.275988 | orchestrator |
2026-01-07 00:02:37.275994 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-07 00:02:37.275998 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-07 00:02:37.276002 | orchestrator | should now work.
2026-01-07 00:02:37.276010 | orchestrator |
2026-01-07 00:02:37.276021 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-07 00:02:37.276025 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-07 00:02:37.276030 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-07 00:02:42.365446 | orchestrator | Created and switched to workspace "ci"!
2026-01-07 00:02:42.365511 | orchestrator |
2026-01-07 00:02:42.365519 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-07 00:02:42.365527 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-07 00:02:42.365533 | orchestrator | for this configuration.
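The provider resolution above implies a `required_providers` block roughly along these lines. This is a sketch reconstructed from the log output, not the actual configuration from the osism/testbed repository; the openstack version constraint is truncated in the log (`= 1.53.0"...`) and therefore omitted here:

```hcl
terraform {
  required_providers {
    openstack = {
      # resolved to v3.4.0 in this run; the constraint line is truncated in the log
      source = "terraform-provider-openstack/openstack"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.6.1
    }
    null = {
      # no constraint shown ("Finding latest version"); resolved to v3.2.4
      source = "hashicorp/null"
    }
  }
}
```

After `tofu init` records these selections in `.terraform.lock.hcl`, subsequent runs reuse the same provider versions, which is why the init output recommends committing the lock file.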
2026-01-07 00:02:42.466607 | orchestrator | ci.auto.tfvars
2026-01-07 00:02:42.846659 | orchestrator | default_custom.tf
2026-01-07 00:02:46.294828 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-07 00:02:46.906655 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-07 00:02:47.173095 | orchestrator |
2026-01-07 00:02:47.173195 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-07 00:02:47.173207 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-07 00:02:47.173246 | orchestrator | + create
2026-01-07 00:02:47.173272 | orchestrator | <= read (data resources)
2026-01-07 00:02:47.173290 | orchestrator |
2026-01-07 00:02:47.173298 | orchestrator | OpenTofu will perform the following actions:
2026-01-07 00:02:47.173468 | orchestrator |
2026-01-07 00:02:47.173489 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-07 00:02:47.173496 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:47.173503 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-07 00:02:47.173511 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:47.173519 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:47.173525 | orchestrator | + file = (known after apply)
2026-01-07 00:02:47.173532 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.173571 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.173578 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:47.173584 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:47.173590 | orchestrator | + most_recent = true
2026-01-07 00:02:47.173597 | orchestrator | + name = (known after apply)
2026-01-07 00:02:47.173604 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:47.173610 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.173621 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:47.173627 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:47.173633 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:47.173641 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:47.173648 | orchestrator | }
2026-01-07 00:02:47.173781 | orchestrator |
2026-01-07 00:02:47.173800 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-07 00:02:47.173807 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:47.173814 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-07 00:02:47.173820 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:47.173827 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:47.173833 | orchestrator | + file = (known after apply)
2026-01-07 00:02:47.173839 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.173845 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.173851 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:47.173857 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:47.173864 | orchestrator | + most_recent = true
2026-01-07 00:02:47.173870 | orchestrator | + name = (known after apply)
2026-01-07 00:02:47.173876 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:47.173882 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.173890 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:47.173939 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:47.173945 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:47.173951 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:47.173956 | orchestrator | }
2026-01-07 00:02:47.174124 | orchestrator |
2026-01-07 00:02:47.174142 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-07 00:02:47.174150 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-07 00:02:47.174155 | orchestrator | + content = (known after apply)
2026-01-07 00:02:47.174162 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:47.174168 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:47.174174 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:47.174180 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:47.174185 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:47.174191 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:47.174197 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:47.174202 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:47.174208 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-07 00:02:47.174214 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.174219 | orchestrator | }
2026-01-07 00:02:47.174321 | orchestrator |
2026-01-07 00:02:47.174337 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-07 00:02:47.174343 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-07 00:02:47.174350 | orchestrator | + content = (known after apply)
2026-01-07 00:02:47.174356 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:47.174361 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:47.174367 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:47.174373 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:47.174379 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:47.174385 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:47.174391 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:47.174397 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:47.174412 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-07 00:02:47.174420 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.174426 | orchestrator | }
2026-01-07 00:02:47.174533 | orchestrator |
2026-01-07 00:02:47.174561 | orchestrator | # local_file.inventory will be created
2026-01-07 00:02:47.174568 | orchestrator | + resource "local_file" "inventory" {
2026-01-07 00:02:47.174574 | orchestrator | + content = (known after apply)
2026-01-07 00:02:47.174580 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:47.174586 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:47.174591 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:47.174596 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:47.174602 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:47.174607 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:47.174613 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:47.174618 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:47.174623 | orchestrator | + filename = "inventory.ci"
2026-01-07 00:02:47.174628 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.174634 | orchestrator | }
2026-01-07 00:02:47.174734 | orchestrator |
2026-01-07 00:02:47.174750 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-07 00:02:47.174757 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-07 00:02:47.174762 | orchestrator | + content = (sensitive value)
2026-01-07 00:02:47.174767 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:47.174773 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:47.174778 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:47.174783 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:47.174788 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:47.174793 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:47.174799 | orchestrator | + directory_permission = "0700"
2026-01-07 00:02:47.174806 | orchestrator | + file_permission = "0600"
2026-01-07 00:02:47.174812 | orchestrator | + filename = ".id_rsa.ci"
2026-01-07 00:02:47.174817 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.174823 | orchestrator | }
2026-01-07 00:02:47.174850 | orchestrator |
2026-01-07 00:02:47.174863 | orchestrator | # null_resource.node_semaphore will be created
2026-01-07 00:02:47.174869 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-07 00:02:47.174874 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.174880 | orchestrator | }
2026-01-07 00:02:47.174985 | orchestrator |
2026-01-07 00:02:47.174999 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-07 00:02:47.175005 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-07 00:02:47.175011 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175017 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175023 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175030 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175035 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175041 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-07 00:02:47.175047 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175052 | orchestrator | + size = 80
2026-01-07 00:02:47.175057 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175062 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175067 | orchestrator | }
2026-01-07 00:02:47.175159 | orchestrator |
2026-01-07 00:02:47.175177 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-07 00:02:47.175183 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.175189 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175194 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175199 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175212 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175217 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175223 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-07 00:02:47.175229 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175235 | orchestrator | + size = 80
2026-01-07 00:02:47.175240 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175245 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175250 | orchestrator | }
2026-01-07 00:02:47.175342 | orchestrator |
2026-01-07 00:02:47.175356 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-07 00:02:47.175362 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.175367 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175373 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175380 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175385 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175390 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175395 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-07 00:02:47.175400 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175405 | orchestrator | + size = 80
2026-01-07 00:02:47.175411 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175416 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175421 | orchestrator | }
2026-01-07 00:02:47.175512 | orchestrator |
2026-01-07 00:02:47.175526 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-07 00:02:47.175533 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.175538 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175543 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175548 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175553 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175559 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175564 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-07 00:02:47.175569 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175575 | orchestrator | + size = 80
2026-01-07 00:02:47.175582 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175588 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175593 | orchestrator | }
2026-01-07 00:02:47.175684 | orchestrator |
2026-01-07 00:02:47.175698 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-07 00:02:47.175703 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.175708 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175715 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175720 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175725 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175731 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175741 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-07 00:02:47.175746 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175752 | orchestrator | + size = 80
2026-01-07 00:02:47.175757 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175763 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175769 | orchestrator | }
2026-01-07 00:02:47.175861 | orchestrator |
2026-01-07 00:02:47.175875 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-07 00:02:47.175881 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.175888 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.175935 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.175940 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.175951 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.175958 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.175964 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-07 00:02:47.175969 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.175975 | orchestrator | + size = 80
2026-01-07 00:02:47.175981 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.175987 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.175993 | orchestrator | }
2026-01-07 00:02:47.176090 | orchestrator |
2026-01-07 00:02:47.176105 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-07 00:02:47.176111 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:47.176116 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176121 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176126 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176132 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:47.176137 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176143 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-07 00:02:47.176148 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176153 | orchestrator | + size = 80
2026-01-07 00:02:47.176158 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176164 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176170 | orchestrator | }
2026-01-07 00:02:47.176262 | orchestrator |
2026-01-07 00:02:47.176275 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-07 00:02:47.176281 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176285 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176289 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176292 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176296 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176301 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-07 00:02:47.176305 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176308 | orchestrator | + size = 20
2026-01-07 00:02:47.176312 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176316 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176320 | orchestrator | }
2026-01-07 00:02:47.176384 | orchestrator |
2026-01-07 00:02:47.176395 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-07 00:02:47.176400 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176404 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176407 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176411 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176415 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176419 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-07 00:02:47.176423 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176426 | orchestrator | + size = 20
2026-01-07 00:02:47.176430 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176434 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176438 | orchestrator | }
2026-01-07 00:02:47.176504 | orchestrator |
2026-01-07 00:02:47.176516 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-07 00:02:47.176520 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176524 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176528 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176532 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176536 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176540 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-07 00:02:47.176544 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176553 | orchestrator | + size = 20
2026-01-07 00:02:47.176557 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176561 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176565 | orchestrator | }
2026-01-07 00:02:47.176626 | orchestrator |
2026-01-07 00:02:47.176637 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-07 00:02:47.176641 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176645 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176649 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176653 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176657 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176661 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-07 00:02:47.176664 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176668 | orchestrator | + size = 20
2026-01-07 00:02:47.176672 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176676 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176679 | orchestrator | }
2026-01-07 00:02:47.176738 | orchestrator |
2026-01-07 00:02:47.176749 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-07 00:02:47.176754 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176757 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176761 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176765 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176769 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176773 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-07 00:02:47.176777 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176785 | orchestrator | + size = 20
2026-01-07 00:02:47.176789 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176793 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176797 | orchestrator | }
2026-01-07 00:02:47.176857 | orchestrator |
2026-01-07 00:02:47.176868 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-07 00:02:47.176873 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.176876 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.176880 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.176884 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.176888 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.176908 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-07 00:02:47.176914 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.176921 | orchestrator | + size = 20
2026-01-07 00:02:47.176927 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.176934 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.176940 | orchestrator | }
2026-01-07 00:02:47.177005 | orchestrator |
2026-01-07 00:02:47.177016 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-07 00:02:47.177021 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.177024 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.177028 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.177032 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.177036 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.177039 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-07 00:02:47.177043 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.177047 | orchestrator | + size = 20
2026-01-07 00:02:47.177051 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.177054 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.177058 | orchestrator | }
2026-01-07 00:02:47.177121 | orchestrator |
2026-01-07 00:02:47.177133 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-07 00:02:47.177137 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:47.177146 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:47.177150 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:47.177153 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.177157 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:47.177161 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-07 00:02:47.177165 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.177168 | orchestrator | + size = 20
2026-01-07 00:02:47.177172 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:47.177176 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:47.177180 | orchestrator | }
2026-01-07 00:02:47.177239 | orchestrator |
2026-01-07 00:02:47.177250 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-07 00:02:47.177254 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-07 00:02:47.177258 | orchestrator | + attachment = (known after apply) 2026-01-07 00:02:47.177262 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.177266 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.177270 | orchestrator | + metadata = (known after apply) 2026-01-07 00:02:47.177274 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-07 00:02:47.177278 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.177281 | orchestrator | + size = 20 2026-01-07 00:02:47.177285 | orchestrator | + volume_retype_policy = "never" 2026-01-07 00:02:47.177289 | orchestrator | + volume_type = "ssd" 2026-01-07 00:02:47.177293 | orchestrator | } 2026-01-07 00:02:47.177504 | orchestrator | 2026-01-07 00:02:47.177523 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-07 00:02:47.177527 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-07 00:02:47.177531 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.177535 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.177539 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.177543 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.177546 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.177550 | orchestrator | + config_drive = true 2026-01-07 00:02:47.177554 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.177558 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.177562 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-07 00:02:47.177565 | orchestrator | + force_delete = false 2026-01-07 00:02:47.177569 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.177573 | 
orchestrator | + id = (known after apply) 2026-01-07 00:02:47.177577 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.177580 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.177584 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.177588 | orchestrator | + name = "testbed-manager" 2026-01-07 00:02:47.177592 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.177596 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.177599 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.177603 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.177607 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.177611 | orchestrator | + user_data = (sensitive value) 2026-01-07 00:02:47.177615 | orchestrator | 2026-01-07 00:02:47.177619 | orchestrator | + block_device { 2026-01-07 00:02:47.177623 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.177626 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.177633 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.177637 | orchestrator | + multiattach = false 2026-01-07 00:02:47.177641 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.177645 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.177653 | orchestrator | } 2026-01-07 00:02:47.177657 | orchestrator | 2026-01-07 00:02:47.177661 | orchestrator | + network { 2026-01-07 00:02:47.177664 | orchestrator | + access_network = false 2026-01-07 00:02:47.177668 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.177672 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.177676 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.177680 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.177683 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.177687 | orchestrator | + uuid = (known after apply) 2026-01-07 
00:02:47.177691 | orchestrator | } 2026-01-07 00:02:47.177695 | orchestrator | } 2026-01-07 00:02:47.177879 | orchestrator | 2026-01-07 00:02:47.177905 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-07 00:02:47.177912 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.177916 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.177919 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.177923 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.177927 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.177931 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.177934 | orchestrator | + config_drive = true 2026-01-07 00:02:47.177938 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.177942 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.177945 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.177949 | orchestrator | + force_delete = false 2026-01-07 00:02:47.177953 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.177957 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.177960 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.177964 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.177968 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.177972 | orchestrator | + name = "testbed-node-0" 2026-01-07 00:02:47.177975 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.177979 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.177983 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.177986 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.177990 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.177994 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.177998 | orchestrator | 2026-01-07 00:02:47.178002 | orchestrator | + block_device { 2026-01-07 00:02:47.178005 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.178009 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.178049 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.178056 | orchestrator | + multiattach = false 2026-01-07 00:02:47.178061 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.178068 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178074 | orchestrator | } 2026-01-07 00:02:47.178080 | orchestrator | 2026-01-07 00:02:47.178086 | orchestrator | + network { 2026-01-07 00:02:47.178092 | orchestrator | + access_network = false 2026-01-07 00:02:47.178099 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.178106 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.178112 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.178118 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.178124 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.178130 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178134 | orchestrator | } 2026-01-07 00:02:47.178138 | orchestrator | } 2026-01-07 00:02:47.178336 | orchestrator | 2026-01-07 00:02:47.178348 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-07 00:02:47.178353 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.178357 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.178366 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.178369 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.178373 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.178377 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.178381 
| orchestrator | + config_drive = true 2026-01-07 00:02:47.178385 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.178389 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.178392 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.178396 | orchestrator | + force_delete = false 2026-01-07 00:02:47.178400 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.178404 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.178408 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.178411 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.178415 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.178419 | orchestrator | + name = "testbed-node-1" 2026-01-07 00:02:47.178423 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.178426 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.178430 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.178434 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.178438 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.178442 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.178445 | orchestrator | 2026-01-07 00:02:47.178449 | orchestrator | + block_device { 2026-01-07 00:02:47.178453 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.178457 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.178461 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.178464 | orchestrator | + multiattach = false 2026-01-07 00:02:47.178468 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.178473 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178480 | orchestrator | } 2026-01-07 00:02:47.178489 | orchestrator | 2026-01-07 00:02:47.178497 | orchestrator | + network { 2026-01-07 00:02:47.178504 | orchestrator | + access_network = 
false 2026-01-07 00:02:47.178510 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.178516 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.178522 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.178528 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.178534 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.178540 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178547 | orchestrator | } 2026-01-07 00:02:47.178553 | orchestrator | } 2026-01-07 00:02:47.178776 | orchestrator | 2026-01-07 00:02:47.178791 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-07 00:02:47.178796 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.178799 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.178803 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.178808 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.178812 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.178822 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.178826 | orchestrator | + config_drive = true 2026-01-07 00:02:47.178830 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.178834 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.178838 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.178842 | orchestrator | + force_delete = false 2026-01-07 00:02:47.178861 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.178865 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.178869 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.178878 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.178882 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.178885 | orchestrator | + name = 
"testbed-node-2" 2026-01-07 00:02:47.178889 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.178913 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.178917 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.178921 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.178924 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.178928 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.178932 | orchestrator | 2026-01-07 00:02:47.178935 | orchestrator | + block_device { 2026-01-07 00:02:47.178939 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.178943 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.178947 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.178950 | orchestrator | + multiattach = false 2026-01-07 00:02:47.178954 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.178958 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178961 | orchestrator | } 2026-01-07 00:02:47.178965 | orchestrator | 2026-01-07 00:02:47.178969 | orchestrator | + network { 2026-01-07 00:02:47.178973 | orchestrator | + access_network = false 2026-01-07 00:02:47.178976 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.178980 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.178984 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.178987 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.178991 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.178995 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.178998 | orchestrator | } 2026-01-07 00:02:47.179002 | orchestrator | } 2026-01-07 00:02:47.179194 | orchestrator | 2026-01-07 00:02:47.179205 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-07 00:02:47.179209 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.179213 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.179217 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.179221 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.179225 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.179228 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.179232 | orchestrator | + config_drive = true 2026-01-07 00:02:47.179236 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.179240 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.179243 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.179247 | orchestrator | + force_delete = false 2026-01-07 00:02:47.179251 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.179255 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.179258 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.179262 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.179266 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.179270 | orchestrator | + name = "testbed-node-3" 2026-01-07 00:02:47.179273 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.179277 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.179281 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.179284 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.179288 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.179292 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.179296 | orchestrator | 2026-01-07 00:02:47.179299 | orchestrator | + block_device { 2026-01-07 00:02:47.179312 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.179315 | orchestrator | + delete_on_termination = false 2026-01-07 
00:02:47.179319 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.179327 | orchestrator | + multiattach = false 2026-01-07 00:02:47.179331 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.179335 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.179338 | orchestrator | } 2026-01-07 00:02:47.179342 | orchestrator | 2026-01-07 00:02:47.179346 | orchestrator | + network { 2026-01-07 00:02:47.179350 | orchestrator | + access_network = false 2026-01-07 00:02:47.179353 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.179357 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.179361 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.179364 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.179368 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.179372 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.179375 | orchestrator | } 2026-01-07 00:02:47.179379 | orchestrator | } 2026-01-07 00:02:47.179563 | orchestrator | 2026-01-07 00:02:47.179575 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-07 00:02:47.179580 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.179584 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.179588 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.179592 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.179595 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.179599 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.179603 | orchestrator | + config_drive = true 2026-01-07 00:02:47.179607 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.179610 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.179614 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.179618 | 
orchestrator | + force_delete = false 2026-01-07 00:02:47.179622 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.179626 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.179629 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.179633 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.179637 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.179641 | orchestrator | + name = "testbed-node-4" 2026-01-07 00:02:47.179644 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.179648 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.179652 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.179656 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:47.179660 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.179664 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.179667 | orchestrator | 2026-01-07 00:02:47.179671 | orchestrator | + block_device { 2026-01-07 00:02:47.179675 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.179679 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.179682 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.179686 | orchestrator | + multiattach = false 2026-01-07 00:02:47.179690 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.179694 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.179697 | orchestrator | } 2026-01-07 00:02:47.179701 | orchestrator | 2026-01-07 00:02:47.179705 | orchestrator | + network { 2026-01-07 00:02:47.179709 | orchestrator | + access_network = false 2026-01-07 00:02:47.179712 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.179716 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.179720 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.179724 | orchestrator | + name = (known 
after apply) 2026-01-07 00:02:47.179727 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.179731 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.179735 | orchestrator | } 2026-01-07 00:02:47.179739 | orchestrator | } 2026-01-07 00:02:47.179970 | orchestrator | 2026-01-07 00:02:47.179990 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-07 00:02:47.179997 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:47.180004 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:47.180010 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:47.180018 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:47.180023 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:47.180029 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:47.180035 | orchestrator | + config_drive = true 2026-01-07 00:02:47.180041 | orchestrator | + created = (known after apply) 2026-01-07 00:02:47.180047 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:47.180053 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:47.180059 | orchestrator | + force_delete = false 2026-01-07 00:02:47.180071 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:47.180075 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.180079 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:47.180083 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:47.180087 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:47.180090 | orchestrator | + name = "testbed-node-5" 2026-01-07 00:02:47.180094 | orchestrator | + power_state = "active" 2026-01-07 00:02:47.180098 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.180102 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:47.180105 | orchestrator | + 
stop_before_destroy = false 2026-01-07 00:02:47.180109 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:47.180113 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:47.180117 | orchestrator | 2026-01-07 00:02:47.180121 | orchestrator | + block_device { 2026-01-07 00:02:47.180124 | orchestrator | + boot_index = 0 2026-01-07 00:02:47.180128 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:47.180132 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:47.180135 | orchestrator | + multiattach = false 2026-01-07 00:02:47.180139 | orchestrator | + source_type = "volume" 2026-01-07 00:02:47.180143 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.180146 | orchestrator | } 2026-01-07 00:02:47.180150 | orchestrator | 2026-01-07 00:02:47.180154 | orchestrator | + network { 2026-01-07 00:02:47.180157 | orchestrator | + access_network = false 2026-01-07 00:02:47.180161 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:47.180165 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:47.180169 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:47.180172 | orchestrator | + name = (known after apply) 2026-01-07 00:02:47.180176 | orchestrator | + port = (known after apply) 2026-01-07 00:02:47.180180 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:47.180183 | orchestrator | } 2026-01-07 00:02:47.180187 | orchestrator | } 2026-01-07 00:02:47.180239 | orchestrator | 2026-01-07 00:02:47.180251 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-07 00:02:47.180255 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-07 00:02:47.180259 | orchestrator | + fingerprint = (known after apply) 2026-01-07 00:02:47.180263 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.180267 | orchestrator | + name = "testbed" 2026-01-07 00:02:47.180270 | orchestrator | + private_key = 
(sensitive value) 2026-01-07 00:02:47.180274 | orchestrator | + public_key = (known after apply) 2026-01-07 00:02:47.180278 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.180282 | orchestrator | + user_id = (known after apply) 2026-01-07 00:02:47.180285 | orchestrator | } 2026-01-07 00:02:47.180325 | orchestrator | 2026-01-07 00:02:47.180336 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-07 00:02:47.180340 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:47.180392 | orchestrator | + device = (known after apply) 2026-01-07 00:02:47.180397 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.180401 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:47.180405 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.180409 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:47.180413 | orchestrator | } 2026-01-07 00:02:47.180464 | orchestrator | 2026-01-07 00:02:47.180481 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-07 00:02:47.180487 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:47.180493 | orchestrator | + device = (known after apply) 2026-01-07 00:02:47.180500 | orchestrator | + id = (known after apply) 2026-01-07 00:02:47.180504 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:47.180508 | orchestrator | + region = (known after apply) 2026-01-07 00:02:47.180512 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:47.180515 | orchestrator | } 2026-01-07 00:02:47.180560 | orchestrator | 2026-01-07 00:02:47.180572 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-07 00:02:47.180576 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-07 00:02:47.185889 | orchestrator | + network_id = (known after apply)
2026-01-07 00:02:47.185910 | orchestrator | + no_gateway = false
2026-01-07 00:02:47.185916 | orchestrator | + region = (known after apply)
2026-01-07 00:02:47.185923 | orchestrator | + service_types = (known after apply)
2026-01-07 00:02:47.185935 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:47.185942 | orchestrator |
2026-01-07 00:02:47.185949 | orchestrator | + allocation_pool {
2026-01-07 00:02:47.185956 | orchestrator | + end = "192.168.31.250"
2026-01-07 00:02:47.185962 | orchestrator | + start = "192.168.31.200"
2026-01-07 00:02:47.185969 | orchestrator | }
2026-01-07 00:02:47.185974 | orchestrator | }
2026-01-07 00:02:47.185978 | orchestrator |
2026-01-07 00:02:47.185981 | orchestrator | # terraform_data.image will be created
2026-01-07 00:02:47.185985 | orchestrator | + resource "terraform_data" "image" {
2026-01-07 00:02:47.185989 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.185993 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:47.185996 | orchestrator | + output = (known after apply)
2026-01-07 00:02:47.186000 | orchestrator | }
2026-01-07 00:02:47.186004 | orchestrator |
2026-01-07 00:02:47.186008 | orchestrator | # terraform_data.image_node will be created
2026-01-07 00:02:47.186042 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-07 00:02:47.186047 | orchestrator | + id = (known after apply)
2026-01-07 00:02:47.186051 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:47.186054 | orchestrator | + output = (known after apply)
2026-01-07 00:02:47.186058 | orchestrator | }
2026-01-07 00:02:47.186062 | orchestrator |
2026-01-07 00:02:47.186066 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-01-07 00:02:47.186070 | orchestrator |
2026-01-07 00:02:47.186073 | orchestrator | Changes to Outputs:
2026-01-07 00:02:47.186077 | orchestrator | + manager_address = (sensitive value)
2026-01-07 00:02:47.186081 | orchestrator | + private_key = (sensitive value)
2026-01-07 00:02:47.449443 | orchestrator | terraform_data.image: Creating...
2026-01-07 00:02:47.449539 | orchestrator | terraform_data.image_node: Creating...
2026-01-07 00:02:47.449556 | orchestrator | terraform_data.image: Creation complete after 0s [id=ec8be5e6-8885-30a3-e0ac-34cf48153497]
2026-01-07 00:02:47.449569 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ee742ce0-4da9-f640-c9fd-75d340beefce]
2026-01-07 00:02:47.463918 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-07 00:02:47.464292 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-07 00:02:47.471354 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-07 00:02:47.471453 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-07 00:02:47.471697 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-07 00:02:47.472576 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-07 00:02:47.475204 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-07 00:02:47.475549 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-07 00:02:47.480609 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-07 00:02:47.485135 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-07 00:02:47.930991 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:47.935728 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:47.939130 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-07 00:02:47.943055 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-07 00:02:47.960467 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-07 00:02:47.965476 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-07 00:02:48.474124 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=4f0972df-55a6-4e1e-a5cf-75d6e1eff738]
2026-01-07 00:02:48.486289 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-07 00:02:51.118780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=af026b45-5f1b-4363-b58a-40461e27717d]
2026-01-07 00:02:51.126571 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-07 00:02:51.135107 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=e61db5f0-3441-47eb-ad05-afa38bd974c9]
2026-01-07 00:02:51.136774 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=ed5d5180-ce4b-4e07-b5d5-8188f5330d5a]
2026-01-07 00:02:51.142301 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-07 00:02:51.143639 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-07 00:02:51.155008 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=39bf8256-2574-48b9-8944-112b8c6b12d8]
2026-01-07 00:02:51.165263 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=4dc8db36-ef1a-4565-8e9f-1534b8544abb]
2026-01-07 00:02:51.165576 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-07 00:02:51.177706 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=ac2b7251-4646-40ec-bf32-0660e60c3d83]
2026-01-07 00:02:51.178615 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-07 00:02:51.183825 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-07 00:02:51.237765 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=75141bd6-71d6-4da2-92fc-ccdb5e69cb7e]
2026-01-07 00:02:51.243047 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=bc9ba819-f1d8-4743-b965-7bb37c5542e6]
2026-01-07 00:02:51.256538 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-07 00:02:51.260484 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=451db668-b89a-4789-9627-d8fc1f6d5aa4]
2026-01-07 00:02:51.260563 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-07 00:02:51.264265 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7cc63908cedfd516e1382ef5972c82de24dbab65]
2026-01-07 00:02:51.264654 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-07 00:02:51.268956 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c3a70b0cf938c6849be1bbe332897292748fbe89]
2026-01-07 00:02:51.851526 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=60b09308-f322-4d53-909e-b455cf42df23]
2026-01-07 00:02:52.950110 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=753be417-0b56-434c-804c-8f24e85e8885]
2026-01-07 00:02:52.958199 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-07 00:02:54.556280 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=3b4668cd-eddb-4f75-af43-c9aa7c282e38]
2026-01-07 00:02:54.560800 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ece979bd-9733-4959-9aed-a1dff3c9e3f2]
2026-01-07 00:02:54.615951 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=28c074ed-67b3-4d4b-904c-ddd92b24aa2c]
2026-01-07 00:02:54.627216 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e]
2026-01-07 00:02:54.665280 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=ba965e08-437e-49d6-b05c-3ab1c9739c43]
2026-01-07 00:02:54.703720 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=6d1c7d3d-c80b-49ab-9488-e886539f8993]
2026-01-07 00:02:58.360460 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=80b13706-dfec-45b8-be28-86e1fad9318f]
2026-01-07 00:02:58.371364 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-07 00:02:58.371500 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-07 00:02:58.372708 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-07 00:02:58.822399 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=157eeddc-6b8e-43bc-b90d-d7d990c66ea8]
2026-01-07 00:02:58.839983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-07 00:02:58.841877 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-07 00:02:58.844859 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-07 00:02:58.845013 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-07 00:02:58.847097 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-07 00:02:58.849390 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-07 00:02:58.850558 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-07 00:02:58.856099 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-07 00:02:59.266437 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=2e313c53-b9a6-4dc0-bf13-50ebc9de52ab]
2026-01-07 00:02:59.282322 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-07 00:02:59.581841 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=009b04bb-d39b-4448-9838-7e7c8928da83]
2026-01-07 00:02:59.591072 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-07 00:02:59.854887 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=c66a3e5a-2c78-43f8-aaee-5044c446fa62]
2026-01-07 00:02:59.862383 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-07 00:03:00.008239 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=22849df8-5f2a-4248-a765-2b994e42e50e]
2026-01-07 00:03:00.015590 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-07 00:03:00.034326 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=36ad460e-6aa6-4afd-b7f3-4212f9a6c57c]
2026-01-07 00:03:00.040292 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-07 00:03:00.077517 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=0c65b1a5-6df8-4516-b3bd-cf94c1026fdb]
2026-01-07 00:03:00.083701 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-07 00:03:00.290623 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=419f5c9e-9ef7-49f4-911d-29e0c7a7a219]
2026-01-07 00:03:00.295439 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-07 00:03:00.343905 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=ba17cd6b-8fae-4a26-9725-23a9473ff7ec]
2026-01-07 00:03:00.354966 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-07 00:03:00.497507 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=42271af3-83fe-48df-936f-8587e120d246]
2026-01-07 00:03:00.534480 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=2408a2ef-9a52-4af5-b1aa-a0d5398a6112]
2026-01-07 00:03:00.669247 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=548944d1-5b6f-4301-93dc-79a73a53c188]
2026-01-07 00:03:00.750828 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=bb64e77c-e9cc-4b29-949d-10c48dee1b9c]
2026-01-07 00:03:00.813038 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=40a94084-f2ba-45ed-bd88-3af3671e2962]
2026-01-07 00:03:00.987324 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=b97531c6-c63f-426b-8976-c3d09e98e7c6]
2026-01-07 00:03:01.509726 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=8ee6cc05-2124-4932-ab7d-6c7c4c96cac5]
2026-01-07 00:03:01.805396 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=e9cdee58-56ee-4cfd-902c-e9f38170a462]
2026-01-07 00:03:01.879425 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=ff46f91e-e0cb-4b33-ba1d-3801d7f7ff2c]
2026-01-07 00:03:02.632553 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=685df147-47be-432f-b7d8-6944fa97c810]
2026-01-07 00:03:02.640235 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-07 00:03:02.671093 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-07 00:03:02.672087 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-07 00:03:02.677054 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-07 00:03:02.684602 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-07 00:03:02.686584 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-07 00:03:02.699118 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-07 00:03:04.135320 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=eb9413a1-a01d-46b3-a135-7b5e9fdfdeb7]
2026-01-07 00:03:04.150633 | orchestrator | local_file.inventory: Creating...
2026-01-07 00:03:04.152058 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-07 00:03:04.154908 | orchestrator | local_file.inventory: Creation complete after 0s [id=41ee9071bae7b16934780421a95d927771d0f052]
2026-01-07 00:03:04.156074 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-07 00:03:04.164423 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b4ce328fc7308219dd41f49f69c9363473681769]
2026-01-07 00:03:04.940621 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=eb9413a1-a01d-46b3-a135-7b5e9fdfdeb7]
2026-01-07 00:03:12.672439 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-07 00:03:12.672587 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-07 00:03:12.678753 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-07 00:03:12.688179 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-07 00:03:12.688236 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-07 00:03:12.700496 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-07 00:03:22.672669 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-07 00:03:22.672828 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-07 00:03:22.679996 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-07 00:03:22.689311 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-07 00:03:22.689478 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-07 00:03:22.701672 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-07 00:03:32.674616 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-07 00:03:32.674812 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-07 00:03:32.681010 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-07 00:03:32.690461 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-07 00:03:32.690593 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-07 00:03:32.702858 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-07 00:03:42.683859 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-07 00:03:42.684038 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-07 00:03:42.684061 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-01-07 00:03:42.691277 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-07 00:03:42.691382 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-07 00:03:42.703774 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-07 00:03:52.693757 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-01-07 00:03:52.693859 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-07 00:03:52.693866 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-01-07 00:03:52.693871 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-01-07 00:03:52.693876 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-01-07 00:03:52.704110 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-07 00:03:53.451058 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 50s [id=061f769a-cc3d-452d-8617-f896502bb8a5]
2026-01-07 00:04:02.694307 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-01-07 00:04:02.694415 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-01-07 00:04:02.694425 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-01-07 00:04:02.694440 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-01-07 00:04:02.704765 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-01-07 00:04:12.698519 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m10s elapsed]
2026-01-07 00:04:12.739571 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-01-07 00:04:12.739653 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m10s elapsed]
2026-01-07 00:04:12.739665 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m10s elapsed]
2026-01-07 00:04:12.739692 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-01-07 00:04:13.633272 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m11s [id=06892798-8de4-47b0-8186-67e6863e0c6d]
2026-01-07 00:04:13.674692 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m11s [id=5469611b-489a-4863-a6d7-5ccf8491e237]
2026-01-07 00:04:13.696846 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m11s [id=f2d48f38-7e98-40df-ad20-5cdd0616591c]
2026-01-07 00:04:13.734773 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m11s [id=cc26b84a-0ec6-4848-ad0c-e7e841c04d70]
2026-01-07 00:04:14.193959 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m11s [id=511dec5e-0678-4822-8be1-0b38f888ab3a]
2026-01-07 00:04:14.215252 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-07 00:04:14.232909 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2694090787411555023]
2026-01-07 00:04:14.241505 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-07 00:04:14.241923 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-07 00:04:14.242309 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-07 00:04:14.271645 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-07 00:04:14.279824 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-07 00:04:14.282286 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-07 00:04:14.282816 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-07 00:04:14.294801 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-07 00:04:14.303858 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-07 00:04:14.311673 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-07 00:04:17.655127 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=511dec5e-0678-4822-8be1-0b38f888ab3a/39bf8256-2574-48b9-8944-112b8c6b12d8]
2026-01-07 00:04:17.703015 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5469611b-489a-4863-a6d7-5ccf8491e237/75141bd6-71d6-4da2-92fc-ccdb5e69cb7e]
2026-01-07 00:04:17.756958 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=511dec5e-0678-4822-8be1-0b38f888ab3a/af026b45-5f1b-4363-b58a-40461e27717d]
2026-01-07 00:04:17.795010 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5469611b-489a-4863-a6d7-5ccf8491e237/bc9ba819-f1d8-4743-b965-7bb37c5542e6]
2026-01-07 00:04:17.843741 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=cc26b84a-0ec6-4848-ad0c-e7e841c04d70/e61db5f0-3441-47eb-ad05-afa38bd974c9]
2026-01-07 00:04:17.938383 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=cc26b84a-0ec6-4848-ad0c-e7e841c04d70/451db668-b89a-4789-9627-d8fc1f6d5aa4]
2026-01-07 00:04:23.876629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=511dec5e-0678-4822-8be1-0b38f888ab3a/ac2b7251-4646-40ec-bf32-0660e60c3d83]
2026-01-07 00:04:23.907385 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=5469611b-489a-4863-a6d7-5ccf8491e237/4dc8db36-ef1a-4565-8e9f-1534b8544abb]
2026-01-07 00:04:23.955853 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=cc26b84a-0ec6-4848-ad0c-e7e841c04d70/ed5d5180-ce4b-4e07-b5d5-8188f5330d5a]
2026-01-07 00:04:24.273829 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-07 00:04:34.274555 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-07 00:04:34.756422 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=ae6a2ee5-7238-4f3a-a799-e82877985ead]
2026-01-07 00:04:34.782880 | orchestrator |
2026-01-07 00:04:34.782935 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-07 00:04:34.782974 | orchestrator |
2026-01-07 00:04:34.782983 | orchestrator | Outputs:
2026-01-07 00:04:34.782990 | orchestrator |
2026-01-07 00:04:34.783012 | orchestrator | manager_address =
2026-01-07 00:04:34.783020 | orchestrator | private_key =
2026-01-07 00:04:35.022261 | orchestrator | ok: Runtime: 0:02:01.616485
2026-01-07 00:04:35.056244 |
2026-01-07 00:04:35.056467 | TASK [Create infrastructure (stable)]
2026-01-07 00:04:35.593554 | orchestrator | skipping: Conditional result was False
2026-01-07 00:04:35.611722 |
2026-01-07 00:04:35.611893 | TASK [Fetch manager address]
2026-01-07 00:04:36.118789 | orchestrator | ok
2026-01-07 00:04:36.132524 |
2026-01-07 00:04:36.132741 | TASK [Set manager_host address]
2026-01-07 00:04:36.225002 | orchestrator | ok
2026-01-07 00:04:36.232553 |
2026-01-07 00:04:36.232684 | LOOP [Update ansible collections]
2026-01-07 00:04:37.529479 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:37.529820 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:37.529880 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:37.529922 | orchestrator | Process install dependency map
2026-01-07 00:04:37.529958 | orchestrator | Starting collection install process
2026-01-07 00:04:37.529992 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-01-07 00:04:37.530030 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-01-07 00:04:37.530115 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-07 00:04:37.530204 | orchestrator | ok: Item: commons Runtime: 0:00:00.917533
2026-01-07 00:04:38.731767 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:38.731909 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:38.731963 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:38.732002 | orchestrator | Process install dependency map
2026-01-07 00:04:38.732038 | orchestrator | Starting collection install process
2026-01-07 00:04:38.732071 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-01-07 00:04:38.732125 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-01-07 00:04:38.732157 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-07 00:04:38.732219 | orchestrator | ok: Item: services Runtime: 0:00:00.884322
2026-01-07 00:04:38.751061 |
2026-01-07 00:04:38.751235 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-07 00:04:49.328605 | orchestrator | ok
2026-01-07 00:04:49.339796 |
2026-01-07 00:04:49.340005 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-07 00:05:49.390825 | orchestrator | ok
2026-01-07 00:05:49.407241 |
2026-01-07 00:05:49.407428 | TASK [Fetch manager ssh hostkey]
2026-01-07 00:05:50.994206 | orchestrator | Output suppressed because no_log was given
2026-01-07 00:05:51.011188 |
2026-01-07 00:05:51.011382 | TASK [Get ssh keypair from terraform environment]
2026-01-07 00:05:51.555481 | orchestrator | ok: Runtime: 0:00:00.011995
2026-01-07 00:05:51.572341 |
2026-01-07 00:05:51.572532 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-07 00:05:51.622178 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-07 00:05:51.632597 | 
2026-01-07 00:05:51.632750 | TASK [Run manager part 0]
2026-01-07 00:05:52.663873 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:05:52.725571 | orchestrator | 
2026-01-07 00:05:52.725667 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-07 00:05:52.725684 | orchestrator | 
2026-01-07 00:05:52.725711 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-07 00:05:54.390958 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:54.391025 | orchestrator | 
2026-01-07 00:05:54.391048 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-07 00:05:54.391058 | orchestrator | 
2026-01-07 00:05:54.391067 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:05:56.486862 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:56.486957 | orchestrator | 
2026-01-07 00:05:56.486970 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-07 00:05:57.187025 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:57.187113 | orchestrator | 
2026-01-07 00:05:57.187124 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-07 00:05:57.246215 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.246297 | orchestrator | 
2026-01-07 00:05:57.246309 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-07 00:05:57.287141 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.287259 | orchestrator | 
2026-01-07 00:05:57.287269 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-07 00:05:57.325714 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.325789 | orchestrator | 
2026-01-07 00:05:57.325796 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-07 00:05:57.361418 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.361499 | orchestrator | 
2026-01-07 00:05:57.361509 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-07 00:05:57.404021 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.404094 | orchestrator | 
2026-01-07 00:05:57.404103 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-07 00:05:57.442689 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.442770 | orchestrator | 
2026-01-07 00:05:57.442784 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-07 00:05:57.482767 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:57.482837 | orchestrator | 
2026-01-07 00:05:57.482845 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-07 00:05:58.222453 | orchestrator | changed: [testbed-manager]
2026-01-07 00:05:58.222521 | orchestrator | 
2026-01-07 00:05:58.222531 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-07 00:08:41.227064 | orchestrator | changed: [testbed-manager]
2026-01-07 00:08:41.227224 | orchestrator | 
2026-01-07 00:08:41.227246 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-07 00:10:17.377563 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:17.377667 | orchestrator | 
2026-01-07 00:10:17.377684 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-07 00:10:43.411995 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:43.412108 | orchestrator | 
2026-01-07 00:10:43.412128 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-07 00:10:51.927229 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:51.927274 | orchestrator | 
2026-01-07 00:10:51.927282 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-07 00:10:51.965248 | orchestrator | ok: [testbed-manager]
2026-01-07 00:10:51.965296 | orchestrator | 
2026-01-07 00:10:51.965302 | orchestrator | TASK [Get current user] ********************************************************
2026-01-07 00:10:52.731143 | orchestrator | ok: [testbed-manager]
2026-01-07 00:10:52.731247 | orchestrator | 
2026-01-07 00:10:52.731267 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-07 00:10:53.490124 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:53.490227 | orchestrator | 
2026-01-07 00:10:53.490244 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-07 00:11:01.630047 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:01.630163 | orchestrator | 
2026-01-07 00:11:01.630211 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-07 00:11:06.927844 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:06.927970 | orchestrator | 
2026-01-07 00:11:06.927989 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-07 00:11:09.657828 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:09.658136 | orchestrator | 
2026-01-07 00:11:09.658163 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-01-07 00:11:11.790880 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:11.790992 | orchestrator | 
2026-01-07 00:11:11.791003 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-01-07 00:11:12.907035 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-01-07 00:11:12.907156 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-01-07 00:11:12.907170 | orchestrator | 
2026-01-07 00:11:12.907182 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-01-07 00:11:12.959048 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-01-07 00:11:12.959537 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-01-07 00:11:12.959553 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-01-07 00:11:12.959567 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-01-07 00:11:16.542576 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-01-07 00:11:16.542660 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-01-07 00:11:16.542674 | orchestrator | 
2026-01-07 00:11:16.542685 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-01-07 00:11:17.123423 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:17.123507 | orchestrator | 
2026-01-07 00:11:17.123521 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-01-07 00:11:36.985325 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-01-07 00:11:36.985448 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-01-07 00:11:36.985468 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-01-07 00:11:36.985481 | orchestrator | 
2026-01-07 00:11:36.985494 | orchestrator | TASK [Install local collections] ***********************************************
2026-01-07 00:11:39.324138 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-01-07 00:11:39.324259 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-01-07 00:11:39.324276 | orchestrator | 
2026-01-07 00:11:39.324288 | orchestrator | PLAY [Create operator user] ****************************************************
2026-01-07 00:11:39.324301 | orchestrator | 
2026-01-07 00:11:39.324313 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:11:40.731208 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:40.731328 | orchestrator | 
2026-01-07 00:11:40.731347 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-07 00:11:40.777540 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:40.777610 | orchestrator | 
2026-01-07 00:11:40.777620 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-07 00:11:40.850102 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:40.850196 | orchestrator | 
2026-01-07 00:11:40.850211 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-07 00:11:41.682445 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:41.682555 | orchestrator | 
2026-01-07 00:11:41.682571 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-07 00:11:42.424959 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:42.425011 | orchestrator | 
2026-01-07 00:11:42.425019 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-07 00:11:43.834497 | orchestrator | changed: [testbed-manager] => (item=adm)
2026-01-07 00:11:43.834553 | orchestrator | changed: [testbed-manager] => (item=sudo)
2026-01-07 00:11:43.834560 | orchestrator | 
2026-01-07 00:11:43.834576 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-07 00:11:45.216146 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:45.216287 | orchestrator | 
2026-01-07 00:11:45.216303 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-07 00:11:47.045023 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:11:47.045121 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2026-01-07 00:11:47.045135 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:11:47.045147 | orchestrator | 
2026-01-07 00:11:47.045160 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-07 00:11:47.124565 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:47.124677 | orchestrator | 
2026-01-07 00:11:47.124693 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-07 00:11:47.206636 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:47.206715 | orchestrator | 
2026-01-07 00:11:47.206725 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-07 00:11:47.772367 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:47.772582 | orchestrator | 
2026-01-07 00:11:47.772604 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-07 00:11:47.856420 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:47.856469 | orchestrator | 
2026-01-07 00:11:47.856476 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-07 00:11:48.775390 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:11:48.775442 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:48.775450 | orchestrator | 
2026-01-07 00:11:48.775456 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-07 00:11:48.815219 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:48.815321 | orchestrator | 
2026-01-07 00:11:48.815338 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-07 00:11:48.856175 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:48.856269 | orchestrator | 
2026-01-07 00:11:48.856285 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-07 00:11:48.902986 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:48.903036 | orchestrator | 
2026-01-07 00:11:48.903046 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-07 00:11:48.977151 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:48.977222 | orchestrator | 
2026-01-07 00:11:48.977229 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-07 00:11:49.709742 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:49.709889 | orchestrator | 
2026-01-07 00:11:49.709911 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-07 00:11:49.709924 | orchestrator | 
2026-01-07 00:11:49.709936 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:11:51.100964 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:51.101069 | orchestrator | 
2026-01-07 00:11:51.101087 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-01-07 00:11:52.074208 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:52.074308 | orchestrator | 
2026-01-07 00:11:52.074324 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:11:52.074339 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
2026-01-07 00:11:52.074352 | orchestrator | 
2026-01-07 00:11:52.413404 | orchestrator | ok: Runtime: 0:06:00.222308
2026-01-07 00:11:52.430755 | 
2026-01-07 00:11:52.430955 | TASK [Point out that the log in on the manager is now possible]
2026-01-07 00:11:52.477861 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2026-01-07 00:11:52.485941 | 
2026-01-07 00:11:52.486067 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-07 00:11:52.535633 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-07 00:11:52.545328 | 
2026-01-07 00:11:52.545483 | TASK [Run manager part 1 + 2]
2026-01-07 00:11:53.494243 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:11:53.550691 | orchestrator | 
2026-01-07 00:11:53.550747 | orchestrator | PLAY [Run manager part 1] ******************************************************
2026-01-07 00:11:53.550755 | orchestrator | 
2026-01-07 00:11:53.550789 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:11:56.262570 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:56.262625 | orchestrator | 
2026-01-07 00:11:56.262647 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-07 00:11:56.314571 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:56.314628 | orchestrator | 
2026-01-07 00:11:56.314640 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-07 00:11:56.372517 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:56.372601 | orchestrator | 
2026-01-07 00:11:56.372619 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-07 00:11:56.428797 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:56.428851 | orchestrator | 
2026-01-07 00:11:56.428860 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-07 00:11:56.516840 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:56.516897 | orchestrator | 
2026-01-07 00:11:56.516907 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-07 00:11:56.589091 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:56.589146 | orchestrator | 
2026-01-07 00:11:56.589155 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-07 00:11:56.649597 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2026-01-07 00:11:56.649646 | orchestrator | 
2026-01-07 00:11:56.649652 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-07 00:11:57.436659 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:57.436730 | orchestrator | 
2026-01-07 00:11:57.436743 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-07 00:11:57.492028 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:11:57.492085 | orchestrator | 
2026-01-07 00:11:57.492093 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-07 00:11:58.966595 | orchestrator | changed: [testbed-manager]
2026-01-07 00:11:58.966661 | orchestrator | 
2026-01-07 00:11:58.966672 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-07 00:11:59.542042 | orchestrator | ok: [testbed-manager]
2026-01-07 00:11:59.542105 | orchestrator | 
2026-01-07 00:11:59.542113 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-07 00:12:00.668579 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:00.668627 | orchestrator | 
2026-01-07 00:12:00.668637 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-07 00:12:15.427383 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:15.427453 | orchestrator | 
2026-01-07 00:12:15.427462 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-07 00:12:16.130133 | orchestrator | ok: [testbed-manager]
2026-01-07 00:12:16.130372 | orchestrator | 
2026-01-07 00:12:16.130394 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-07 00:12:16.184683 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:12:16.184731 | orchestrator | 
2026-01-07 00:12:16.184738 | orchestrator | TASK [Copy SSH public key] *****************************************************
2026-01-07 00:12:17.124234 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:17.124282 | orchestrator | 
2026-01-07 00:12:17.124291 | orchestrator | TASK [Copy SSH private key] ****************************************************
2026-01-07 00:12:18.090885 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:18.091004 | orchestrator | 
2026-01-07 00:12:18.091021 | orchestrator | TASK [Create configuration directory] ******************************************
2026-01-07 00:12:18.686599 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:18.686691 | orchestrator | 
2026-01-07 00:12:18.686709 | orchestrator | TASK [Copy testbed repo] *******************************************************
2026-01-07 00:12:18.729108 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-01-07 00:12:18.729193 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-01-07 00:12:18.729199 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-01-07 00:12:18.729205 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-01-07 00:12:21.147052 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:21.147165 | orchestrator | 
2026-01-07 00:12:21.147183 | orchestrator | TASK [Install python requirements in venv] *************************************
2026-01-07 00:12:29.870508 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2026-01-07 00:12:29.870624 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2026-01-07 00:12:29.870644 | orchestrator | ok: [testbed-manager] => (item=packaging)
2026-01-07 00:12:29.870658 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2026-01-07 00:12:29.870680 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2026-01-07 00:12:29.870692 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2026-01-07 00:12:29.870704 | orchestrator | 
2026-01-07 00:12:29.870717 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2026-01-07 00:12:30.920101 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:30.920156 | orchestrator | 
2026-01-07 00:12:30.920166 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2026-01-07 00:12:30.963965 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:12:30.964053 | orchestrator | 
2026-01-07 00:12:30.964070 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2026-01-07 00:12:33.753434 | orchestrator | changed: [testbed-manager]
2026-01-07 00:12:33.753545 | orchestrator | 
2026-01-07 00:12:33.753562 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2026-01-07 00:12:33.801797 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:12:33.801890 | orchestrator | 
2026-01-07 00:12:33.801906 | orchestrator | TASK [Run manager part 2] ******************************************************
2026-01-07 00:14:03.859323 | orchestrator | changed: [testbed-manager]
2026-01-07 00:14:03.859369 | orchestrator | 
2026-01-07 00:14:03.859377 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:14:04.957153 | orchestrator | ok: [testbed-manager]
2026-01-07 00:14:04.957216 | orchestrator | 
2026-01-07 00:14:04.957232 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:14:04.957246 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2026-01-07 00:14:04.957257 | orchestrator | 
2026-01-07 00:14:05.189518 | orchestrator | ok: Runtime: 0:02:12.212572
2026-01-07 00:14:05.211103 | 
2026-01-07 00:14:05.211390 | TASK [Reboot manager]
2026-01-07 00:14:06.758270 | orchestrator | ok: Runtime: 0:00:00.918838
2026-01-07 00:14:06.776236 | 
2026-01-07 00:14:06.776404 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-07 00:14:20.494152 | orchestrator | ok
2026-01-07 00:14:20.505928 | 
2026-01-07 00:14:20.506112 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-07 00:15:20.553829 | orchestrator | ok
2026-01-07 00:15:20.564729 | 
2026-01-07 00:15:20.564916 | TASK [Deploy manager + bootstrap nodes]
2026-01-07 00:15:23.030210 | orchestrator | 
2026-01-07 00:15:23.030656 | orchestrator | # DEPLOY MANAGER
2026-01-07 00:15:23.030684 | orchestrator | 
2026-01-07 00:15:23.030698 | orchestrator | + set -e
2026-01-07 00:15:23.030712 | orchestrator | + echo
2026-01-07 00:15:23.030725 | orchestrator | + echo '# DEPLOY MANAGER'
2026-01-07 00:15:23.030743 | orchestrator | + echo
2026-01-07 00:15:23.030789 | orchestrator | + cat /opt/manager-vars.sh
2026-01-07 00:15:23.034284 | orchestrator | export NUMBER_OF_NODES=6
2026-01-07 00:15:23.034315 | orchestrator | 
2026-01-07 00:15:23.034328 | orchestrator | export CEPH_VERSION=reef
2026-01-07 00:15:23.034341 | orchestrator | export CONFIGURATION_VERSION=main
2026-01-07 00:15:23.034354 | orchestrator | export MANAGER_VERSION=latest
2026-01-07 00:15:23.034376 | orchestrator | export OPENSTACK_VERSION=2025.1
2026-01-07 00:15:23.034387 | orchestrator | 
2026-01-07 00:15:23.034405 | orchestrator | export ARA=false
2026-01-07 00:15:23.034417 | orchestrator | export DEPLOY_MODE=manager
2026-01-07 00:15:23.034435 | orchestrator | export TEMPEST=true
2026-01-07 00:15:23.034447 | orchestrator | export IS_ZUUL=true
2026-01-07 00:15:23.034458 | orchestrator | 
2026-01-07 00:15:23.034477 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-01-07 00:15:23.034489 | orchestrator | export EXTERNAL_API=false
2026-01-07 00:15:23.034500 | orchestrator | 
2026-01-07 00:15:23.034510 | orchestrator | export IMAGE_USER=ubuntu
2026-01-07 00:15:23.034525 | orchestrator | export IMAGE_NODE_USER=ubuntu
2026-01-07 00:15:23.034536 | orchestrator | 
2026-01-07 00:15:23.034547 | orchestrator | export CEPH_STACK=ceph-ansible
2026-01-07 00:15:23.034565 | orchestrator | 
2026-01-07 00:15:23.034576 | orchestrator | + echo
2026-01-07 00:15:23.034589 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-07 00:15:23.035151 | orchestrator | ++ export INTERACTIVE=false
2026-01-07 00:15:23.035215 | orchestrator | ++ INTERACTIVE=false
2026-01-07 00:15:23.035262 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-07 00:15:23.035286 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-07 00:15:23.035304 | orchestrator | + source /opt/manager-vars.sh
2026-01-07 00:15:23.035315 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-07 00:15:23.035346 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-07 00:15:23.035357 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-07 00:15:23.035368 | orchestrator | ++ CEPH_VERSION=reef
2026-01-07 00:15:23.035379 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-07 00:15:23.035435 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-07 00:15:23.035448 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-07 00:15:23.035459 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-07 00:15:23.035470 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-07 00:15:23.035490 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-07 00:15:23.035502 | orchestrator | ++ export ARA=false
2026-01-07 00:15:23.035513 | orchestrator | ++ ARA=false
2026-01-07 00:15:23.035523 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-07 00:15:23.035534 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-07 00:15:23.035545 | orchestrator | ++ export TEMPEST=true
2026-01-07 00:15:23.035555 | orchestrator | ++ TEMPEST=true
2026-01-07 00:15:23.035566 | orchestrator | ++ export IS_ZUUL=true
2026-01-07 00:15:23.035577 | orchestrator | ++ IS_ZUUL=true
2026-01-07 00:15:23.035588 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-01-07 00:15:23.035599 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-01-07 00:15:23.035675 | orchestrator | ++ export EXTERNAL_API=false
2026-01-07 00:15:23.035691 | orchestrator | ++ EXTERNAL_API=false
2026-01-07 00:15:23.035702 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-07 00:15:23.035713 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-07 00:15:23.035730 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-07 00:15:23.035741 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-07 00:15:23.035752 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-07 00:15:23.035763 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-07 00:15:23.035774 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2026-01-07 00:15:23.091774 | orchestrator | + docker version
2026-01-07 00:15:23.332518 | orchestrator | Client: Docker Engine - Community
2026-01-07 00:15:23.332650 | orchestrator | Version: 27.5.1
2026-01-07 00:15:23.332668 | orchestrator | API version: 1.47 2026-01-07 00:15:23.332683 | orchestrator | Go version: go1.22.11 2026-01-07 00:15:23.332694 | orchestrator | Git commit: 9f9e405 2026-01-07 00:15:23.332706 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:15:23.332718 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:15:23.332729 | orchestrator | Context: default 2026-01-07 00:15:23.332740 | orchestrator | 2026-01-07 00:15:23.332752 | orchestrator | Server: Docker Engine - Community 2026-01-07 00:15:23.332763 | orchestrator | Engine: 2026-01-07 00:15:23.332774 | orchestrator | Version: 27.5.1 2026-01-07 00:15:23.332786 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-07 00:15:23.332842 | orchestrator | Go version: go1.22.11 2026-01-07 00:15:23.332855 | orchestrator | Git commit: 4c9b3b0 2026-01-07 00:15:23.332866 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:15:23.332876 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:15:23.332887 | orchestrator | Experimental: false 2026-01-07 00:15:23.332898 | orchestrator | containerd: 2026-01-07 00:15:23.332908 | orchestrator | Version: v2.2.1 2026-01-07 00:15:23.332920 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-07 00:15:23.332931 | orchestrator | runc: 2026-01-07 00:15:23.332942 | orchestrator | Version: 1.3.4 2026-01-07 00:15:23.332953 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-07 00:15:23.332964 | orchestrator | docker-init: 2026-01-07 00:15:23.332975 | orchestrator | Version: 0.19.0 2026-01-07 00:15:23.332987 | orchestrator | GitCommit: de40ad0 2026-01-07 00:15:23.336678 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-07 00:15:23.347223 | orchestrator | + set -e 2026-01-07 00:15:23.347266 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:15:23.347277 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:15:23.347290 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 
00:15:23.347301 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-07 00:15:23.347312 | orchestrator | ++ CEPH_VERSION=reef
2026-01-07 00:15:23.347323 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-07 00:15:23.347334 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-07 00:15:23.347345 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-07 00:15:23.347356 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-07 00:15:23.347367 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-07 00:15:23.347378 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-07 00:15:23.347389 | orchestrator | ++ export ARA=false
2026-01-07 00:15:23.347400 | orchestrator | ++ ARA=false
2026-01-07 00:15:23.347410 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-07 00:15:23.347421 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-07 00:15:23.347432 | orchestrator | ++ export TEMPEST=true
2026-01-07 00:15:23.347443 | orchestrator | ++ TEMPEST=true
2026-01-07 00:15:23.347461 | orchestrator | ++ export IS_ZUUL=true
2026-01-07 00:15:23.347472 | orchestrator | ++ IS_ZUUL=true
2026-01-07 00:15:23.347483 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-01-07 00:15:23.347494 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-01-07 00:15:23.347505 | orchestrator | ++ export EXTERNAL_API=false
2026-01-07 00:15:23.347516 | orchestrator | ++ EXTERNAL_API=false
2026-01-07 00:15:23.347527 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-07 00:15:23.347537 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-07 00:15:23.347548 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-07 00:15:23.347592 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-07 00:15:23.347604 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-07 00:15:23.347691 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-07 00:15:23.347713 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-07 00:15:23.347724 | orchestrator | ++ export INTERACTIVE=false
2026-01-07 00:15:23.347735 | orchestrator | ++ INTERACTIVE=false
2026-01-07 00:15:23.347746 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-07 00:15:23.347763 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-07 00:15:23.347779 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-07 00:15:23.347791 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-07 00:15:23.347802 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-01-07 00:15:23.355364 | orchestrator | + set -e
2026-01-07 00:15:23.355401 | orchestrator | + VERSION=reef
2026-01-07 00:15:23.356439 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:15:23.364949 | orchestrator | + [[ -n ceph_version: reef ]]
2026-01-07 00:15:23.364980 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:15:23.370255 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1
2026-01-07 00:15:23.376126 | orchestrator | + set -e
2026-01-07 00:15:23.376191 | orchestrator | + VERSION=2025.1
2026-01-07 00:15:23.376869 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:15:23.380768 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-01-07 00:15:23.380809 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml
2026-01-07 00:15:23.385234 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-07 00:15:23.386290 | orchestrator | ++ semver latest 7.0.0
2026-01-07 00:15:23.445938 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:15:23.446123 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-07 00:15:23.446155 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-07 00:15:23.447053 | orchestrator | ++ semver latest 10.0.0-0
2026-01-07 00:15:23.509199 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:15:23.509298 | orchestrator | ++ semver 2025.1 2025.1
2026-01-07 00:15:23.591923 | orchestrator | + [[ 0 -ge 0 ]]
2026-01-07 00:15:23.592026 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-07 00:15:23.598543 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-07 00:15:23.603588 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-07 00:15:23.697121 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-07 00:15:23.699966 | orchestrator | + source /opt/venv/bin/activate
2026-01-07 00:15:23.702391 | orchestrator | ++ deactivate nondestructive
2026-01-07 00:15:23.702431 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:15:23.702444 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:15:23.702455 | orchestrator | ++ hash -r
2026-01-07 00:15:23.702466 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:15:23.702477 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-07 00:15:23.702488 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-07 00:15:23.702500 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-07 00:15:23.702512 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-07 00:15:23.702523 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-07 00:15:23.702534 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-07 00:15:23.702544 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-07 00:15:23.702556 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:15:23.702588 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:15:23.702599 | orchestrator | ++ export PATH
2026-01-07 00:15:23.702637 | orchestrator | ++ '[' -n '' ']'
2026-01-07 00:15:23.702649 | orchestrator | ++ '[' -z '' ']'
2026-01-07 00:15:23.702660 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-07 00:15:23.702671 | orchestrator | ++ PS1='(venv) '
2026-01-07 00:15:23.702683 | orchestrator | ++ export PS1
2026-01-07 00:15:23.702694 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-07 00:15:23.702704 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-07 00:15:23.702715 | orchestrator | ++ hash -r
2026-01-07 00:15:23.702726 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-07 00:15:24.825506 | orchestrator |
2026-01-07 00:15:24.825607 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-07 00:15:24.825648 | orchestrator |
2026-01-07 00:15:24.825660 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:15:25.369748 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:25.369847 | orchestrator |
2026-01-07 00:15:25.369863 | orchestrator | TASK [Copy fact files] *********************************************************
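The shell trace above shows how the testbed pins component versions: each `set-*-version.sh` script greps `configuration.yml` for the existing key and, only if it is present, rewrites it in place with `sed -i`. A minimal sketch of that grep-then-sed pattern, using a temporary stand-in file rather than the real testbed path:

```shell
#!/usr/bin/env bash
# Sketch of the grep-then-sed version pinning seen in the trace above.
# The config file here is a temporary stand-in, not the real testbed path.
set -e
CONF=$(mktemp)
echo "openstack_version: 2024.2" > "$CONF"

VERSION="2025.1"
# Only rewrite when the key already exists, mirroring the [[ -n ... ]] guard
# in the trace; a missing key is left alone instead of being appended.
if [[ -n "$(grep '^openstack_version:' "$CONF")" ]]; then
    sed -i "s/openstack_version: .*/openstack_version: ${VERSION}/g" "$CONF"
fi

result=$(cat "$CONF")
echo "$result"
rm -f "$CONF"
```

Because the `sed` replacement matches the whole value, re-running the script with the same version is a no-op, which keeps the step idempotent across job retries.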
2026-01-07 00:15:26.326244 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:26.326342 | orchestrator |
2026-01-07 00:15:26.326363 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-07 00:15:26.326376 | orchestrator |
2026-01-07 00:15:26.326388 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:15:28.486046 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:28.486185 | orchestrator |
2026-01-07 00:15:28.487309 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-07 00:15:28.537562 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:28.537689 | orchestrator |
2026-01-07 00:15:28.537707 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-07 00:15:28.973267 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:28.973363 | orchestrator |
2026-01-07 00:15:28.973378 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-07 00:15:29.002197 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:15:29.002286 | orchestrator |
2026-01-07 00:15:29.002300 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-07 00:15:29.303560 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:29.303711 | orchestrator |
2026-01-07 00:15:29.303729 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-07 00:15:29.346592 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:15:29.346721 | orchestrator |
2026-01-07 00:15:29.346746 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-07 00:15:29.657658 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:29.657755 | orchestrator |
2026-01-07 00:15:29.657772 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-07 00:15:29.771213 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:15:29.771342 | orchestrator |
2026-01-07 00:15:29.771359 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-07 00:15:29.771372 | orchestrator |
2026-01-07 00:15:29.771384 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:15:31.466832 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:31.466937 | orchestrator |
2026-01-07 00:15:31.466954 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-07 00:15:31.574119 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-07 00:15:31.574216 | orchestrator |
2026-01-07 00:15:31.574230 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-07 00:15:31.620938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-07 00:15:31.621036 | orchestrator |
2026-01-07 00:15:31.621053 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-07 00:15:32.644863 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-07 00:15:32.644983 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-07 00:15:32.644999 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-07 00:15:32.645010 | orchestrator |
2026-01-07 00:15:32.645022 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-07 00:15:34.404832 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-07 00:15:34.404942 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-07 00:15:34.404952 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-07 00:15:34.404960 | orchestrator |
2026-01-07 00:15:34.404968 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-07 00:15:35.024590 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:15:35.024712 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:35.024727 | orchestrator |
2026-01-07 00:15:35.024739 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-07 00:15:35.607387 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:15:35.607527 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:35.607556 | orchestrator |
2026-01-07 00:15:35.607579 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-07 00:15:35.657830 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:15:35.657928 | orchestrator |
2026-01-07 00:15:35.657943 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-07 00:15:36.017934 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:36.018119 | orchestrator |
2026-01-07 00:15:36.018152 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-07 00:15:36.080568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-07 00:15:36.080725 | orchestrator |
2026-01-07 00:15:36.080752 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-07 00:15:37.086477 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:37.086588 | orchestrator |
2026-01-07 00:15:37.086631 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-07 00:15:37.848509 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:37.848656 | orchestrator |
2026-01-07 00:15:37.848673 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-07 00:15:52.339792 | orchestrator | changed: [testbed-manager]
2026-01-07 00:15:52.339904 | orchestrator |
2026-01-07 00:15:52.339920 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-07 00:15:52.396564 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:15:52.396697 | orchestrator |
2026-01-07 00:15:52.396714 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-07 00:15:52.396726 | orchestrator |
2026-01-07 00:15:52.396737 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:15:54.173256 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:54.173354 | orchestrator |
2026-01-07 00:15:54.173372 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-07 00:15:54.276913 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-07 00:15:54.277013 | orchestrator |
2026-01-07 00:15:54.277030 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-07 00:15:54.328675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-07 00:15:54.328778 | orchestrator |
2026-01-07 00:15:54.328797 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-07 00:15:57.004559 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:57.005411 | orchestrator |
2026-01-07 00:15:57.005447 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-07 00:15:57.062381 | orchestrator | ok: [testbed-manager]
2026-01-07 00:15:57.062471 | orchestrator |
2026-01-07 00:15:57.062488 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-07 00:15:57.183141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-07 00:15:57.183240 | orchestrator |
2026-01-07 00:15:57.183267 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-07 00:15:59.882286 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-07 00:15:59.882386 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-07 00:15:59.882403 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-07 00:15:59.882416 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-07 00:15:59.882427 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-07 00:15:59.882438 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-07 00:15:59.882450 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-07 00:15:59.882461 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-07 00:15:59.882472 | orchestrator |
2026-01-07 00:15:59.882484 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-07 00:16:00.489363 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:00.489475 | orchestrator |
2026-01-07 00:16:00.489499 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-07 00:16:01.096433 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:01.096532 | orchestrator |
2026-01-07 00:16:01.096550 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-07 00:16:01.175883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-07 00:16:01.175974 | orchestrator |
2026-01-07 00:16:01.175989 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-07 00:16:02.308755 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-07 00:16:02.351348 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-07 00:16:02.351447 | orchestrator |
2026-01-07 00:16:02.351472 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-07 00:16:02.915197 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:02.915297 | orchestrator |
2026-01-07 00:16:02.915315 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-07 00:16:02.973124 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:16:02.973216 | orchestrator |
2026-01-07 00:16:02.973234 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-07 00:16:03.047400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-07 00:16:03.047527 | orchestrator |
2026-01-07 00:16:03.047545 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-07 00:16:03.652584 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:03.652711 | orchestrator |
2026-01-07 00:16:03.652729 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-07 00:16:03.707504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-07 00:16:03.707596 | orchestrator |
2026-01-07 00:16:03.707636 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-07 00:16:05.042533 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:16:05.042704 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:16:05.042723 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:05.042737 | orchestrator |
2026-01-07 00:16:05.042749 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-07 00:16:05.644741 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:05.644840 | orchestrator |
2026-01-07 00:16:05.644857 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-07 00:16:05.698082 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:16:05.698232 | orchestrator |
2026-01-07 00:16:05.698290 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-07 00:16:05.789650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-07 00:16:05.789747 | orchestrator |
2026-01-07 00:16:05.789762 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-07 00:16:06.314897 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:06.315001 | orchestrator |
2026-01-07 00:16:06.315019 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-07 00:16:06.719959 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:06.720080 | orchestrator |
2026-01-07 00:16:06.720107 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-07 00:16:07.941428 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-07 00:16:07.941541 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-07 00:16:07.941557 | orchestrator |
2026-01-07 00:16:07.941571 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-07 00:16:08.566256 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:08.566353 | orchestrator |
2026-01-07 00:16:08.566369 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-07 00:16:08.944185 | orchestrator | ok: [testbed-manager]
2026-01-07 00:16:08.944282 | orchestrator |
2026-01-07 00:16:08.944302 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-07 00:16:09.292127 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:09.292243 | orchestrator |
2026-01-07 00:16:09.292261 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-07 00:16:09.338073 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:16:09.338176 | orchestrator |
2026-01-07 00:16:09.338192 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-07 00:16:09.404175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-07 00:16:09.404267 | orchestrator |
2026-01-07 00:16:09.404283 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-07 00:16:09.446158 | orchestrator | ok: [testbed-manager]
2026-01-07 00:16:09.446249 | orchestrator |
2026-01-07 00:16:09.446263 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-07 00:16:11.451281 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-07 00:16:11.451377 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-07 00:16:11.451392 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-07 00:16:11.451405 | orchestrator |
2026-01-07 00:16:11.451420 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-07 00:16:12.145998 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:12.146186 | orchestrator |
2026-01-07 00:16:12.146204 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-07 00:16:12.855810 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:12.855913 | orchestrator |
2026-01-07 00:16:12.855931 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-07 00:16:13.544936 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:13.545058 | orchestrator |
2026-01-07 00:16:13.545075 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-07 00:16:13.625212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-07 00:16:13.625315 | orchestrator |
2026-01-07 00:16:13.625330 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-07 00:16:13.666582 | orchestrator | ok: [testbed-manager]
2026-01-07 00:16:13.666706 | orchestrator |
2026-01-07 00:16:13.666720 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-07 00:16:14.334100 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-07 00:16:14.334202 | orchestrator |
2026-01-07 00:16:14.334217 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-07 00:16:14.409737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-07 00:16:14.409824 | orchestrator |
2026-01-07 00:16:14.409839 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-07 00:16:15.074134 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:15.074234 | orchestrator |
2026-01-07 00:16:15.074249 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-07 00:16:15.652709 | orchestrator | ok: [testbed-manager]
2026-01-07 00:16:15.652784 | orchestrator |
2026-01-07 00:16:15.652794 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-07 00:16:15.715174 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:16:15.715262 | orchestrator |
2026-01-07 00:16:15.715276 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-07 00:16:15.768841 | orchestrator | ok: [testbed-manager]
2026-01-07 00:16:15.768933 | orchestrator |
2026-01-07 00:16:15.768949 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-07 00:16:16.589866 | orchestrator | changed: [testbed-manager]
2026-01-07 00:16:16.589954 | orchestrator |
2026-01-07 00:16:16.589969 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-07 00:17:24.534974 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:24.535096 | orchestrator |
2026-01-07 00:17:24.535114 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-07 00:17:25.441006 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:25.441114 | orchestrator |
2026-01-07 00:17:25.441153 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-07 00:17:25.496088 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:25.496175 | orchestrator |
2026-01-07 00:17:25.496185 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-07 00:17:28.348986 | orchestrator | changed: [testbed-manager]
2026-01-07 00:17:28.349077 | orchestrator |
2026-01-07 00:17:28.349091 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-07 00:17:28.399738 | orchestrator | ok: [testbed-manager]
2026-01-07 00:17:28.399847 | orchestrator |
2026-01-07 00:17:28.399873 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-07 00:17:28.399896 | orchestrator |
2026-01-07 00:17:28.399917 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-07 00:17:28.444920 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:17:28.444999 | orchestrator |
2026-01-07 00:17:28.445012 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-07 00:18:28.495267 | orchestrator | Pausing for 60 seconds
2026-01-07 00:18:28.495378 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:28.495394 | orchestrator |
2026-01-07 00:18:28.495408 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-07 00:18:31.091408 | orchestrator | changed: [testbed-manager]
2026-01-07 00:18:31.091517 | orchestrator |
2026-01-07 00:18:31.091533 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-07 00:19:12.580173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-07 00:19:12.580298 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
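The "Wait for an healthy manager service" handler above is a bounded retry loop: it polls a health check, logs `FAILED - RETRYING ... retries left`, and gives up after a fixed retry budget. A minimal sketch of that pattern with a pluggable check command (the real handler would poll something like the container health status, e.g. `docker inspect --format '{{.State.Health.Status}}' <container>`; the `check` function below is a stand-in that succeeds on the third call):

```shell
#!/usr/bin/env bash
# Sketch of the bounded retry-until-healthy pattern used by the handler above.
wait_healthy() {
    local retries=$1; shift
    local delay=$1; shift
    local attempt
    for ((attempt = 1; attempt <= retries; attempt++)); do
        if "$@"; then
            echo "healthy after ${attempt} attempt(s)"
            return 0
        fi
        # Mirror the handler's countdown-style progress message.
        echo "FAILED - RETRYING ($((retries - attempt)) retries left)" >&2
        sleep "$delay"
    done
    return 1
}

# Demo: a stand-in health check that succeeds on the third call.
n=0
check() { n=$((n + 1)); [ "$n" -ge 3 ]; }
msg=$(wait_healthy 50 0 check)
echo "$msg"
```

Bounding the retries (here 50, matching the handler's budget) turns a hung service into a deterministic job failure instead of an indefinite stall.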
2026-01-07 00:19:12.580316 | orchestrator | changed: [testbed-manager]
2026-01-07 00:19:12.580330 | orchestrator |
2026-01-07 00:19:12.580342 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-07 00:19:22.892431 | orchestrator | changed: [testbed-manager]
2026-01-07 00:19:22.892605 | orchestrator |
2026-01-07 00:19:22.892633 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-07 00:19:22.974291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-07 00:19:22.974388 | orchestrator |
2026-01-07 00:19:22.974402 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-07 00:19:22.974415 | orchestrator |
2026-01-07 00:19:22.974426 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-07 00:19:23.040435 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:19:23.040532 | orchestrator |
2026-01-07 00:19:23.040558 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-07 00:19:23.110520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-07 00:19:23.110614 | orchestrator |
2026-01-07 00:19:23.110627 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-07 00:19:23.823576 | orchestrator | changed: [testbed-manager]
2026-01-07 00:19:23.823695 | orchestrator |
2026-01-07 00:19:23.823708 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-07 00:19:26.619713 | orchestrator | ok: [testbed-manager]
2026-01-07 00:19:26.619848 | orchestrator |
2026-01-07 00:19:26.619880 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-07 00:19:26.691811 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:19:26.691938 | orchestrator | "version_check_result.stdout_lines": [
2026-01-07 00:19:26.691971 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-07 00:19:26.691993 | orchestrator | "Checking running containers against expected versions...",
2026-01-07 00:19:26.692016 | orchestrator | "",
2026-01-07 00:19:26.692036 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-07 00:19:26.692052 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-07 00:19:26.692064 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692076 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-07 00:19:26.692087 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692098 | orchestrator | "",
2026-01-07 00:19:26.692110 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-07 00:19:26.692121 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-07 00:19:26.692132 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692143 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-07 00:19:26.692154 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692165 | orchestrator | "",
2026-01-07 00:19:26.692176 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-07 00:19:26.692187 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-07 00:19:26.692198 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692209 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-07 00:19:26.692220 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692231 | orchestrator | "",
2026-01-07 00:19:26.692242 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-07 00:19:26.692254 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-07 00:19:26.692290 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692304 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-07 00:19:26.692316 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692329 | orchestrator | "",
2026-01-07 00:19:26.692341 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-07 00:19:26.692354 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-07 00:19:26.692367 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692379 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-07 00:19:26.692392 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692403 | orchestrator | "",
2026-01-07 00:19:26.692414 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-07 00:19:26.692425 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.692436 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692447 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.692458 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692469 | orchestrator | "",
2026-01-07 00:19:26.692480 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-07 00:19:26.692491 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:19:26.692502 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692513 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:19:26.692524 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692535 | orchestrator | "",
2026-01-07 00:19:26.692545 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-07 00:19:26.692567 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:19:26.692578 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692595 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:19:26.692606 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692618 | orchestrator | "",
2026-01-07 00:19:26.692629 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-07 00:19:26.692748 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-07 00:19:26.692766 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692783 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-07 00:19:26.692800 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692817 | orchestrator | "",
2026-01-07 00:19:26.692835 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-07 00:19:26.692852 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:19:26.692871 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692887 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:19:26.692904 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.692920 | orchestrator | "",
2026-01-07 00:19:26.692937 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-07 00:19:26.692954 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.692972 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.692989 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693072 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.693092 | orchestrator | "",
2026-01-07 00:19:26.693112 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-07 00:19:26.693130 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693149 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.693161 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693172 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.693183 | orchestrator | "",
2026-01-07 00:19:26.693194 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-07 00:19:26.693204 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693215 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.693226 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693251 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.693268 | orchestrator | "",
2026-01-07 00:19:26.693286 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-07 00:19:26.693303 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693322 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.693342 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693360 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.693376 | orchestrator | "",
2026-01-07 00:19:26.693388 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-07 00:19:26.693422 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693433 | orchestrator | " Enabled: true",
2026-01-07 00:19:26.693444 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-07 00:19:26.693460 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:19:26.693478 | orchestrator | "",
2026-01-07 00:19:26.693495 | orchestrator | "=== Summary ===",
2026-01-07 00:19:26.693512 | orchestrator | "Errors (version mismatches): 0",
2026-01-07 00:19:26.693531 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-07 00:19:26.693549 | orchestrator | "",
2026-01-07 00:19:26.693566 | orchestrator | "✅ All running containers match expected versions!"
2026-01-07 00:19:26.693586 | orchestrator | ]
2026-01-07 00:19:26.693605 | orchestrator | }
2026-01-07 00:19:26.693624 | orchestrator |
2026-01-07 00:19:26.693680 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-07 00:19:26.739311 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:19:26.739445 | orchestrator |
2026-01-07 00:19:26.739474 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:19:26.739496 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-07 00:19:26.739508 | orchestrator |
2026-01-07 00:19:26.834877 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-07 00:19:26.834965 | orchestrator | + deactivate
2026-01-07 00:19:26.834977 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-07 00:19:26.834988 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:19:26.834998 | orchestrator | + export PATH
2026-01-07 00:19:26.835006 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-07 00:19:26.835016 | orchestrator | + '[' -n '' ']'
2026-01-07 00:19:26.835025 | orchestrator | + hash -r
2026-01-07 00:19:26.835033 | orchestrator | + '[' -n '' ']'
2026-01-07 00:19:26.835042 | orchestrator | + unset VIRTUAL_ENV
2026-01-07 00:19:26.835050 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-07 00:19:26.835059 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-01-07 00:19:26.835068 | orchestrator | + unset -f deactivate 2026-01-07 00:19:26.835077 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-07 00:19:26.842674 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-07 00:19:26.842725 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-07 00:19:26.842738 | orchestrator | + local max_attempts=60 2026-01-07 00:19:26.842749 | orchestrator | + local name=ceph-ansible 2026-01-07 00:19:26.842760 | orchestrator | + local attempt_num=1 2026-01-07 00:19:26.843633 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-07 00:19:26.874747 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:19:26.874883 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-07 00:19:26.874897 | orchestrator | + local max_attempts=60 2026-01-07 00:19:26.874909 | orchestrator | + local name=kolla-ansible 2026-01-07 00:19:26.874920 | orchestrator | + local attempt_num=1 2026-01-07 00:19:26.874992 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-07 00:19:26.902472 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:19:26.902578 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-07 00:19:26.902593 | orchestrator | + local max_attempts=60 2026-01-07 00:19:26.902604 | orchestrator | + local name=osism-ansible 2026-01-07 00:19:26.902615 | orchestrator | + local attempt_num=1 2026-01-07 00:19:26.902927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-07 00:19:26.934490 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-07 00:19:26.934605 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-07 00:19:26.934620 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-07 00:19:27.591123 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-07 00:19:27.747748 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-07 00:19:27.747852 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.747870 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.747882 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-01-07 00:19:27.747895 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-01-07 00:19:27.747906 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.747916 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.747949 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-01-07 00:19:27.747960 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.747971 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-01-07 00:19:27.747982 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-01-07 00:19:27.747993 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-01-07 00:19:27.748004 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.748014 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-01-07 00:19:27.748026 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.748037 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-01-07 00:19:27.753411 | orchestrator | ++ semver latest 7.0.0 2026-01-07 00:19:27.798356 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:19:27.798541 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-07 00:19:27.798557 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-07 00:19:27.800740 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-07 00:19:40.103522 | orchestrator | 2026-01-07 00:19:40 | INFO  | Task a5a75b1c-e120-4563-a086-4dccd5d334e0 (resolvconf) was prepared for execution. 2026-01-07 00:19:40.103698 | orchestrator | 2026-01-07 00:19:40 | INFO  | It takes a moment until task a5a75b1c-e120-4563-a086-4dccd5d334e0 (resolvconf) has been started and output is visible here. 
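
The `wait_for_container_healthy` helper traced above can be sketched as follows. Only the variable setup and the `docker inspect` health probe are visible in the trace, so the retry loop is inferred, and the overridable probe command is an addition so the sketch can be exercised without a Docker daemon:

```shell
# Hypothetical reconstruction of wait_for_container_healthy as traced above.
# Polls a container's health status until it reports "healthy" or the
# attempt budget is exhausted.
wait_for_healthy() {
    local max_attempts="$1" name="$2"
    local probe="${3:-}"
    # Default probe matches the trace; any command printing "healthy"
    # for a healthy container can be substituted for offline testing.
    if [ -z "$probe" ]; then
        probe="docker inspect -f {{.State.Health.Status}}"
    fi
    local attempt_num=1
    until [ "$($probe "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
}
```

In the log the real helper is invoked as `wait_for_container_healthy 60 ceph-ansible` (and again for `kolla-ansible` and `osism-ansible`) and returns immediately, since each container is already healthy on the first probe.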
2026-01-07 00:19:53.605565 | orchestrator | 2026-01-07 00:19:53.605732 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-07 00:19:53.605747 | orchestrator | 2026-01-07 00:19:53.605757 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:19:53.605765 | orchestrator | Wednesday 07 January 2026 00:19:44 +0000 (0:00:00.133) 0:00:00.133 ***** 2026-01-07 00:19:53.605774 | orchestrator | ok: [testbed-manager] 2026-01-07 00:19:53.605783 | orchestrator | 2026-01-07 00:19:53.605792 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-07 00:19:53.605802 | orchestrator | Wednesday 07 January 2026 00:19:47 +0000 (0:00:03.659) 0:00:03.792 ***** 2026-01-07 00:19:53.605810 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:19:53.605819 | orchestrator | 2026-01-07 00:19:53.605827 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-07 00:19:53.605835 | orchestrator | Wednesday 07 January 2026 00:19:47 +0000 (0:00:00.065) 0:00:03.857 ***** 2026-01-07 00:19:53.605843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-07 00:19:53.605852 | orchestrator | 2026-01-07 00:19:53.605860 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-07 00:19:53.605868 | orchestrator | Wednesday 07 January 2026 00:19:48 +0000 (0:00:00.074) 0:00:03.931 ***** 2026-01-07 00:19:53.605886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:19:53.605894 | orchestrator | 2026-01-07 00:19:53.605903 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-07 00:19:53.605912 | orchestrator | Wednesday 07 January 2026 00:19:48 +0000 (0:00:00.071) 0:00:04.003 ***** 2026-01-07 00:19:53.605920 | orchestrator | ok: [testbed-manager] 2026-01-07 00:19:53.605928 | orchestrator | 2026-01-07 00:19:53.605936 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-07 00:19:53.605944 | orchestrator | Wednesday 07 January 2026 00:19:49 +0000 (0:00:01.061) 0:00:05.064 ***** 2026-01-07 00:19:53.605952 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:19:53.605960 | orchestrator | 2026-01-07 00:19:53.605968 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-07 00:19:53.605976 | orchestrator | Wednesday 07 January 2026 00:19:49 +0000 (0:00:00.058) 0:00:05.123 ***** 2026-01-07 00:19:53.605984 | orchestrator | ok: [testbed-manager] 2026-01-07 00:19:53.605992 | orchestrator | 2026-01-07 00:19:53.606000 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-07 00:19:53.606008 | orchestrator | Wednesday 07 January 2026 00:19:49 +0000 (0:00:00.500) 0:00:05.623 ***** 2026-01-07 00:19:53.606064 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:19:53.606073 | orchestrator | 2026-01-07 00:19:53.606082 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-07 00:19:53.606091 | orchestrator | Wednesday 07 January 2026 00:19:49 +0000 (0:00:00.082) 0:00:05.706 ***** 2026-01-07 00:19:53.606099 | orchestrator | changed: [testbed-manager] 2026-01-07 00:19:53.606108 | orchestrator | 2026-01-07 00:19:53.606117 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-07 00:19:53.606126 | orchestrator | Wednesday 07 January 2026 00:19:50 +0000 (0:00:00.500) 0:00:06.207 ***** 2026-01-07 00:19:53.606135 | orchestrator | changed: 
[testbed-manager] 2026-01-07 00:19:53.606162 | orchestrator | 2026-01-07 00:19:53.606172 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-07 00:19:53.606181 | orchestrator | Wednesday 07 January 2026 00:19:51 +0000 (0:00:01.046) 0:00:07.253 ***** 2026-01-07 00:19:53.606190 | orchestrator | ok: [testbed-manager] 2026-01-07 00:19:53.606199 | orchestrator | 2026-01-07 00:19:53.606209 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-07 00:19:53.606218 | orchestrator | Wednesday 07 January 2026 00:19:52 +0000 (0:00:00.895) 0:00:08.149 ***** 2026-01-07 00:19:53.606227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-07 00:19:53.606236 | orchestrator | 2026-01-07 00:19:53.606245 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-07 00:19:53.606254 | orchestrator | Wednesday 07 January 2026 00:19:52 +0000 (0:00:00.072) 0:00:08.222 ***** 2026-01-07 00:19:53.606263 | orchestrator | changed: [testbed-manager] 2026-01-07 00:19:53.606272 | orchestrator | 2026-01-07 00:19:53.606281 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:19:53.606292 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:19:53.606300 | orchestrator | 2026-01-07 00:19:53.606308 | orchestrator | 2026-01-07 00:19:53.606316 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:19:53.606323 | orchestrator | Wednesday 07 January 2026 00:19:53 +0000 (0:00:01.099) 0:00:09.321 ***** 2026-01-07 00:19:53.606331 | orchestrator | =============================================================================== 2026-01-07 00:19:53.606339 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.66s 2026-01-07 00:19:53.606347 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2026-01-07 00:19:53.606355 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2026-01-07 00:19:53.606363 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2026-01-07 00:19:53.606370 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2026-01-07 00:19:53.606378 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2026-01-07 00:19:53.606401 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-01-07 00:19:53.606410 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-07 00:19:53.606418 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-01-07 00:19:53.606426 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-01-07 00:19:53.606433 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-01-07 00:19:53.606441 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-07 00:19:53.606449 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-01-07 00:19:53.863206 | orchestrator | + osism apply sshconfig 2026-01-07 00:20:05.879822 | orchestrator | 2026-01-07 00:20:05 | INFO  | Task 027dfdf8-016b-4deb-83e4-0fd161a8deaf (sshconfig) was prepared for execution. 
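
The resolvconf tasks above amount to: archive any plain `/etc/resolv.conf`, link it to systemd-resolved's stub resolver, then restart the service. A minimal standalone sketch of the file-handling steps — a hypothetical reconstruction, not the role's actual code, with a `root` prefix added so it can run against a scratch directory instead of the live system:

```shell
# Hypothetical sketch of the key resolvconf steps from the play above:
# back up a plain resolv.conf, then point it at systemd-resolved's stub.
link_stub_resolv() {
    local root="$1"
    local stub="/run/systemd/resolve/stub-resolv.conf"
    local conf="$root/etc/resolv.conf"

    mkdir -p "$root/etc"
    # Mirrors the "Archive existing file /etc/resolv.conf" task: keep a
    # copy of any regular file before replacing it with the symlink.
    if [ -f "$conf" ] && [ ! -L "$conf" ]; then
        mv "$conf" "$conf.orig"
    fi
    ln -sfn "$stub" "$conf"
}
```

On a real host the role follows this with a restart of `systemd-resolved`, which the play shows as its final changed task.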
2026-01-07 00:20:05.879945 | orchestrator | 2026-01-07 00:20:05 | INFO  | It takes a moment until task 027dfdf8-016b-4deb-83e4-0fd161a8deaf (sshconfig) has been started and output is visible here. 2026-01-07 00:20:16.921833 | orchestrator | 2026-01-07 00:20:16.921957 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-07 00:20:16.921976 | orchestrator | 2026-01-07 00:20:16.921989 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-07 00:20:16.922000 | orchestrator | Wednesday 07 January 2026 00:20:09 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-01-07 00:20:16.922012 | orchestrator | ok: [testbed-manager] 2026-01-07 00:20:16.922110 | orchestrator | 2026-01-07 00:20:16.922122 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-07 00:20:16.922134 | orchestrator | Wednesday 07 January 2026 00:20:10 +0000 (0:00:00.536) 0:00:00.691 ***** 2026-01-07 00:20:16.922145 | orchestrator | changed: [testbed-manager] 2026-01-07 00:20:16.922157 | orchestrator | 2026-01-07 00:20:16.922168 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-07 00:20:16.922179 | orchestrator | Wednesday 07 January 2026 00:20:10 +0000 (0:00:00.486) 0:00:01.177 ***** 2026-01-07 00:20:16.922190 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:20:16.922201 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:20:16.922244 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:20:16.922257 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:20:16.922268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-07 00:20:16.922278 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:20:16.922289 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-07 00:20:16.922300 | orchestrator | 2026-01-07 00:20:16.922310 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-07 00:20:16.922321 | orchestrator | Wednesday 07 January 2026 00:20:16 +0000 (0:00:05.203) 0:00:06.381 ***** 2026-01-07 00:20:16.922332 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:20:16.922343 | orchestrator | 2026-01-07 00:20:16.922353 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-07 00:20:16.922365 | orchestrator | Wednesday 07 January 2026 00:20:16 +0000 (0:00:00.070) 0:00:06.452 ***** 2026-01-07 00:20:16.922378 | orchestrator | changed: [testbed-manager] 2026-01-07 00:20:16.922391 | orchestrator | 2026-01-07 00:20:16.922403 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:20:16.922417 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:20:16.922430 | orchestrator | 2026-01-07 00:20:16.922443 | orchestrator | 2026-01-07 00:20:16.922455 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:20:16.922468 | orchestrator | Wednesday 07 January 2026 00:20:16 +0000 (0:00:00.528) 0:00:06.980 ***** 2026-01-07 00:20:16.922480 | orchestrator | =============================================================================== 2026-01-07 00:20:16.922492 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.20s 2026-01-07 00:20:16.922505 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2026-01-07 00:20:16.922516 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s 2026-01-07 00:20:16.922529 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.49s 2026-01-07 00:20:16.922541 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-07 00:20:17.191252 | orchestrator | + osism apply known-hosts 2026-01-07 00:20:29.304028 | orchestrator | 2026-01-07 00:20:29 | INFO  | Task a9491ce5-d545-4192-a6a4-cfb9aba5ef11 (known-hosts) was prepared for execution. 2026-01-07 00:20:29.304175 | orchestrator | 2026-01-07 00:20:29 | INFO  | It takes a moment until task a9491ce5-d545-4192-a6a4-cfb9aba5ef11 (known-hosts) has been started and output is visible here. 2026-01-07 00:20:44.870400 | orchestrator | 2026-01-07 00:20:44.870554 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-07 00:20:44.870574 | orchestrator | 2026-01-07 00:20:44.870586 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-07 00:20:44.870598 | orchestrator | Wednesday 07 January 2026 00:20:33 +0000 (0:00:00.118) 0:00:00.118 ***** 2026-01-07 00:20:44.870610 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:20:44.870622 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-07 00:20:44.870701 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:20:44.870722 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:20:44.870740 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:20:44.870757 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:20:44.870775 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:20:44.870793 | orchestrator | 2026-01-07 00:20:44.870813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-07 00:20:44.870835 | orchestrator | Wednesday 07 January 2026 00:20:38 +0000 (0:00:05.609) 0:00:05.727 ***** 2026-01-07 
00:20:44.870856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-07 00:20:44.870895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-07 00:20:44.870915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-07 00:20:44.870935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-07 00:20:44.870955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-07 00:20:44.870975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-07 00:20:44.870996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-07 00:20:44.871015 | orchestrator | 2026-01-07 00:20:44.871034 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871048 | orchestrator | Wednesday 07 January 2026 00:20:39 +0000 (0:00:00.140) 0:00:05.868 ***** 2026-01-07 00:20:44.871060 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIACNMlDXzBmS+aZADqS1+gyYWbSBxno0KL6Ny6RnVpzy) 2026-01-07 00:20:44.871080 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh0EXt2HiB9St7KgdJ16g25PgZPWrQmm3XnbRtW4pMShC8xhX/WwwfQsqzGXdP0m1zlClnjFUBRmzlcX+TRhkb6U/v/57KtvDGlsDuQD8eI0Q5GyYNJSaQy/BlShwU6AmOfNDxbzNcBQin+HpVhw/kDAeGitlwl8YhOQvcL+fkRV3Hocnw9YDufB9BNdEjoNVDKdewVhxtwSO2DjooePeboswgo6HAJXhuD1+Wrc5kAAIs2rbkCL/BSimQ5A0L+2WUwzDhLhCun3H8zcZi6O3Nc792l0/MxDbZyoKTHp5z5uWXwY+7TedKzUYnQbQ5Y+8ZAboipeiDX0nJXRk8wwLSzpAl2Du+qklU28YtN2fHucppZq0aui6tnw38hwHjaLXHQqZN2/UjZcpJCrufvMkDCXUhUEUeZPku2rM7RyE+wlHimlRQXKl+c2KkRc+7j4Q8dIVTDfXxCFQYfvEGfyx/i3oGNqQ850KqUkNfY86eEC+DS2oi2AkQNrX2NUdpLoU=) 2026-01-07 00:20:44.871103 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOW6MdtDLSj3ew8zxV3HIEPVvmb1Y7NNucYd75lY+qmvxyD6GlS8cvYNJF+G7VNzqShAIXnfIyW4WadU2S99qn0=) 2026-01-07 00:20:44.871118 | orchestrator | 2026-01-07 00:20:44.871131 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871145 | orchestrator | Wednesday 07 January 2026 00:20:39 +0000 (0:00:00.967) 0:00:06.835 ***** 2026-01-07 00:20:44.871180 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNksry9EtkxK4+po84NgogcUhe0Sa99wZf6eVVtjZsjRwbI7r1IekzhU+nYJZgKSuUPimY8e34t7YfIFbeNLo0QKyGiyACdI2u70HPQTZVgOjjQDd7BJZkU98I5nDjaT4AQ5urxtkRkf97ZDwSu7MUoyilLp9uuQHiaUZ0pOYPZayjaVvr0BvxBa4FVdkhrYYjrSXtxhAOCH1febxpkbG6zFUYa+vZfO7/+5Wzi0uni2k1L+5COtgnflnrXdTnMXx14IxQ7Md6acrR4FUaB9Q/kSUQb+lG6/SbYYPk6X85EoFRuF/nt0shDOJyo5OT7Q9SwWpeTmjrw3nBO4jpInpe6K4g/pr1CDpKvgalt5POrj26rSUl5PXNkz70U6inYMQqlXW4hy8RFZr7bxiOc2+quvAovojTnumHPVzpY8hRR5ugScruTcbZRRmlEgCF1/J8f0f8oCEGQ3qJbwYdOX34civXWGLxOf2+/8sy9vAXp3ZCae3K0h4c9QhSJ4+yIU0=) 2026-01-07 00:20:44.871207 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3xhdO6ItdG6Fp5JGFbPQdIRcHMUTCmvbaJPFYINDLQkZWArEbp2/qXb7JrJTNiebPSDs4zd/qdpmwY7W6qErQ=) 2026-01-07 00:20:44.871221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINRqOqnKau/cGRrGJVNV0DpnCjH5MxzSXHW0KiOo/FYc) 2026-01-07 00:20:44.871235 | orchestrator | 2026-01-07 00:20:44.871248 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871259 | orchestrator | Wednesday 07 January 2026 00:20:40 +0000 (0:00:00.903) 0:00:07.739 ***** 2026-01-07 00:20:44.871271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqKlZ14q0G4RxQ1tsdmNioCqnSkT0/zmAQ7v2UOZUX/Jm/PQPtwVp+cuoQ3vJ0Qpvn9KGTuZJqoz6rihyWK9TVS0dZhjiVSBxW3E69sDLMFqd3QYulAW52n6/N5aiIDyc2WkDZzoBhLUrPSNE0zADQ7Hiu6YuBdBgb0GtNEKfDXSq9Bz+gJFiDVoBHrXIFyc5o4VPPs9C24oCL48NTX6FHfslBLeL1zyuO/92d/X5KalBnfmMTC1V4XLajbEmooXWFbBhHnf2rI28oDSWbeytLNNuJxGhugZHJ+1gThXAz0w9029xZZMW2CIQNzZbLWHlCyMMY0zMDZiGXM3LDdGwmdsflK+aSYVnd5pxgBv2AmMx2BCAbERMSSFGikSn9NHnZdjrsNlnIfud47Ya28YAFYBGdd0sPnt/jiBp7KXEFXrDhwpOrm3aZM8Nn1p9cUmYHfjgzDdaFx8HwO+JC/sdBQbs2bCLtwYHh4BMSBlqPhYVQ0JZ7XxsJh5Rw7qkDb/E=) 2026-01-07 00:20:44.871283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXWOMEuN8qae0K2NsaVI+84V8sOMu/alqED5GD3/d++xpg3Bkd0KZsEbLxc5oHI4LJJtY87vmX/d8dqkZvcriw=) 2026-01-07 00:20:44.871294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFD13CQ7X8LIMRQrb+XYoVLRJDRBEwWY5+jcsjB7OILn) 2026-01-07 00:20:44.871305 | orchestrator | 2026-01-07 00:20:44.871316 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871405 | orchestrator | Wednesday 07 January 2026 00:20:41 +0000 (0:00:00.994) 
0:00:08.733 ***** 2026-01-07 00:20:44.871417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDr21SYj3+1yQc1JOpK0e784nRsRSC9Ly1Qa+xkZwCtB) 2026-01-07 00:20:44.871428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOV8PNKsXAncAz7daF+8gzDLkbmu6LI/AECP2XviZccao6meulhThhFAeS4REcaiJptNOKfrjheySbVLKJAbJZ14rfQ1BVcZTgaYyy4sdNQ15g3D+bPKhf5q8TMn70YIrfFrvBjHDwiq0iT0bdefscU/KBSN1UFaZTs+nijygaDi4JgpYIFJLBec3CZBL3UR1mcPM/pdvo85dp6L3Lm3vS/XSXNbeH7/02MfVEtPxInwYBU4oSdlpOE97mJh4u5KuI787T+uPtP9/FxrbeBxac1uKlDS2qaZT54i9JrJuCjdBRDWiKwhS/mVxlhtKIiGA/c4TnZATL24uhoxAFTFbKG2GoExFjfeRfvObzLz4qFMSYsEcEtymUXUKTN2fhYGT9qGi+s/QN05dq8EFJ/IXSbkQ1Y7TDUZNrYlAwBQMuvSO7DPnFC9k01x0gxZHrEaPjTx58X7MkEA0wAwsLj775f/ppagLvrhK4vJwu0QDKy6tPN8LONZetRzgLzf9eNZ8=) 2026-01-07 00:20:44.871439 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+2Vmfzg8DCbmxSRmVuq9gQwjKbj8AVJ2ja3fldELa8juRBMr2ZTIgPnb/yfY8NWRguJN9AfYoYzjRUrgW2aW0=) 2026-01-07 00:20:44.871450 | orchestrator | 2026-01-07 00:20:44.871461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871472 | orchestrator | Wednesday 07 January 2026 00:20:42 +0000 (0:00:01.000) 0:00:09.734 ***** 2026-01-07 00:20:44.871483 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPjSaVs4lIzArHSovvyBotVULCCt9/1lBSJErLTgGCMx) 2026-01-07 00:20:44.871494 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQClx32r+T7icavtVzzmo4mMlvUrI1FHDLiyvRYokpA2D41M5kXGUzdLKwM9zgaw6RXkD5lVH2+TAUT3mPo2LjY3ER5C2FbmviD2WT0K8fj6HCnYAH4Estae4w3NVFYbTcBGJtxJXdxJLtchHVwL5aERS6vJY/vQb6tVe96Klm02wqD5Gw+4HQ/Ukwuel6fZdz+MtUBJIMWc22FwXOAb1RILEvGBkOBx/y09CjH4Dqe1jcCwUdozux+XmQHQVLy6TszeOd3r9c452lwW4f3lwm3/+r/X2UwaDBb8CgYcitoNh3+lHioWGm3j/ppxYOrZmPL9jk/glhuQlSk3Bz+HjsePWBj2u1oXUjdyzCC28r+xSEAcznj9C/Vvyj+QbSuq14g1xQ7PJKabgEIOiglWnnbIlVkTftBzDgf1cF0UWEu0ycnAZACYbFDZ572mE6YW1zpTeBxb9zIbeB5biZ4qg/NUiEYJ5WhF10EWBge82MReVJEjX4qamgg3k2iZdn1fEBM=) 2026-01-07 00:20:44.871512 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPl7dkhQ87B8aeBdDBhf7bucPrPP/L8rV92U00dnn11PEnwkWvqnuYUWQSsPshQ/Oq8ybKGsHaP32jVuwDkxfs=) 2026-01-07 00:20:44.871523 | orchestrator | 2026-01-07 00:20:44.871534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:44.871545 | orchestrator | Wednesday 07 January 2026 00:20:43 +0000 (0:00:01.000) 0:00:10.734 ***** 2026-01-07 00:20:44.871564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqQsY/oAoyJByF+UVxbQ0Ibk80CfnMnkH8vgkDObQCq) 2026-01-07 00:20:56.248259 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBWqCv4+JMZgvzy82WQvq1ZQafMGWVc2415aB1giq2l+0XoWdKwXSSdnxvx/GaV6IJ1BXdB+QJb1YlpnI6ZUaTi+c8VKbToFC+35eRSdetpYqroGkGkQai8NPvyPnTPJQHGUPY6/OI4eTo0koFriMpB+teU5hyIdg+mCpsUHAfSaa7icFazWDk+RBpsgFNobPhsYJrjUNlPZ/uPZyRpa2xBHxY2c0v+yRiH/4bqAWH293k7GSvKWr+Thbc3qEVeH1ctRN/5+5ZX/+ohn29kZ7Z4XEPsWdNW/E/u8v0paGjFrZ0HzJ/Dmpizeb+sShI+No1oUUU6JgTvroefBcunmZBAyuS+zUwIRtZusY35O5YtRP36YcWAQ0dKuFJZcX7pC61PipPeQySyvFPfWjKzpSV9xx4l9pHl+YxYtLZhmOPGzmJZlZ2tej3cjBYddUXck55v7DVvTSkUOuNWEl/smSRIr96R3CbdVvxdoDPDAvwbM4T8al7zOPdYlw6Dd6vc6s=) 2026-01-07 00:20:56.248381 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOzlVctA/xIF+6sslTPMCW6oZYHWjGRrrprmn9pcTQLpl0l99BGAXvysUhr865jjhj5ZY3WG3AKh1JAS9pCW3J0=) 2026-01-07 00:20:56.248399 | orchestrator | 2026-01-07 00:20:56.248413 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:56.248427 | orchestrator | Wednesday 07 January 2026 00:20:44 +0000 (0:00:00.978) 0:00:11.712 ***** 2026-01-07 00:20:56.248438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA9akWZs4kpvRr6POn959nBx6u9HYP6hsmSCzN0GTMCC) 2026-01-07 00:20:56.248451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqN1OsJcUZi3WIXmnqkkAqWbVfH5hFLetVcSFJdB1AL+NkyJzbWGG2TkQUFMf81lIij5pRoXWD9tiPjNorX4j2Bw59A3qlqsKU40ppQN7tSknhzmo/8O5MMtWn4TxVD0lnJuFLArQ9An52s+ZPTj7XX/YApJuy95iDHmvIYMAKlkRPhsVFzDz4HaDezHbAAURjDjYVn2pPq4njBBK44H54UIDFHmW7oOI7RMsylhSuEl8FOs7bZeDYtj5Qo3osRafmUUhW2W8EtgecLGwfFgVGOBDxrLykfaRt9m/NYBzoZZrNfEbXOB24fusoES6WzQWEzn8nKh+ilC2Zce892C33TgwK/jtVnCOdcsR82gOsiVgs1pcrRIiixm22PHlE+Us5Dd1vCgW+GEBAFdxacmFJy5NCZYNwB7DF80Vw9tJTzxXT+ScuZxPU5Mue2qZq2k+UiLEI8ZFcouwLD40S2t2uD9F7KYehqsL8GKOYjuZQVqGZFgegeOZAth5uYQSbEeU=) 2026-01-07 00:20:56.248463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFpjXDVkyoaLCsSthTCgULynLJU7NEcJjQBO96XZO+NgYamVZITtzRbRBWlR4A2tKk9e/BdRiOZtJVW0c1o5p/Y=) 2026-01-07 00:20:56.248474 | orchestrator | 2026-01-07 00:20:56.248486 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-07 00:20:56.248498 | orchestrator | Wednesday 07 January 2026 00:20:45 +0000 (0:00:01.009) 0:00:12.722 ***** 2026-01-07 00:20:56.248510 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:20:56.248522 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-3) 2026-01-07 00:20:56.248533 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:20:56.248544 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:20:56.248555 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:20:56.248566 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:20:56.248604 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:20:56.248615 | orchestrator | 2026-01-07 00:20:56.248626 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-07 00:20:56.248714 | orchestrator | Wednesday 07 January 2026 00:20:51 +0000 (0:00:05.141) 0:00:17.864 ***** 2026-01-07 00:20:56.248731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-07 00:20:56.248744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-07 00:20:56.248755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-07 00:20:56.248766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-07 00:20:56.248777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-07 00:20:56.248788 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-07 00:20:56.248799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-07 00:20:56.248810 | orchestrator | 2026-01-07 00:20:56.248836 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:56.248848 | orchestrator | Wednesday 07 January 2026 00:20:51 +0000 (0:00:00.182) 0:00:18.046 ***** 2026-01-07 00:20:56.248859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOW6MdtDLSj3ew8zxV3HIEPVvmb1Y7NNucYd75lY+qmvxyD6GlS8cvYNJF+G7VNzqShAIXnfIyW4WadU2S99qn0=) 2026-01-07 00:20:56.248871 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh0EXt2HiB9St7KgdJ16g25PgZPWrQmm3XnbRtW4pMShC8xhX/WwwfQsqzGXdP0m1zlClnjFUBRmzlcX+TRhkb6U/v/57KtvDGlsDuQD8eI0Q5GyYNJSaQy/BlShwU6AmOfNDxbzNcBQin+HpVhw/kDAeGitlwl8YhOQvcL+fkRV3Hocnw9YDufB9BNdEjoNVDKdewVhxtwSO2DjooePeboswgo6HAJXhuD1+Wrc5kAAIs2rbkCL/BSimQ5A0L+2WUwzDhLhCun3H8zcZi6O3Nc792l0/MxDbZyoKTHp5z5uWXwY+7TedKzUYnQbQ5Y+8ZAboipeiDX0nJXRk8wwLSzpAl2Du+qklU28YtN2fHucppZq0aui6tnw38hwHjaLXHQqZN2/UjZcpJCrufvMkDCXUhUEUeZPku2rM7RyE+wlHimlRQXKl+c2KkRc+7j4Q8dIVTDfXxCFQYfvEGfyx/i3oGNqQ850KqUkNfY86eEC+DS2oi2AkQNrX2NUdpLoU=) 2026-01-07 00:20:56.248883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACNMlDXzBmS+aZADqS1+gyYWbSBxno0KL6Ny6RnVpzy) 2026-01-07 00:20:56.248894 | orchestrator | 2026-01-07 00:20:56.248905 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:56.248916 | orchestrator | Wednesday 07 January 2026 
00:20:53 +0000 (0:00:02.002) 0:00:20.049 ***** 2026-01-07 00:20:56.248928 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNksry9EtkxK4+po84NgogcUhe0Sa99wZf6eVVtjZsjRwbI7r1IekzhU+nYJZgKSuUPimY8e34t7YfIFbeNLo0QKyGiyACdI2u70HPQTZVgOjjQDd7BJZkU98I5nDjaT4AQ5urxtkRkf97ZDwSu7MUoyilLp9uuQHiaUZ0pOYPZayjaVvr0BvxBa4FVdkhrYYjrSXtxhAOCH1febxpkbG6zFUYa+vZfO7/+5Wzi0uni2k1L+5COtgnflnrXdTnMXx14IxQ7Md6acrR4FUaB9Q/kSUQb+lG6/SbYYPk6X85EoFRuF/nt0shDOJyo5OT7Q9SwWpeTmjrw3nBO4jpInpe6K4g/pr1CDpKvgalt5POrj26rSUl5PXNkz70U6inYMQqlXW4hy8RFZr7bxiOc2+quvAovojTnumHPVzpY8hRR5ugScruTcbZRRmlEgCF1/J8f0f8oCEGQ3qJbwYdOX34civXWGLxOf2+/8sy9vAXp3ZCae3K0h4c9QhSJ4+yIU0=) 2026-01-07 00:20:56.248948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3xhdO6ItdG6Fp5JGFbPQdIRcHMUTCmvbaJPFYINDLQkZWArEbp2/qXb7JrJTNiebPSDs4zd/qdpmwY7W6qErQ=) 2026-01-07 00:20:56.248960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINRqOqnKau/cGRrGJVNV0DpnCjH5MxzSXHW0KiOo/FYc) 2026-01-07 00:20:56.248970 | orchestrator | 2026-01-07 00:20:56.248981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:56.248993 | orchestrator | Wednesday 07 January 2026 00:20:54 +0000 (0:00:01.025) 0:00:21.074 ***** 2026-01-07 00:20:56.249004 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqKlZ14q0G4RxQ1tsdmNioCqnSkT0/zmAQ7v2UOZUX/Jm/PQPtwVp+cuoQ3vJ0Qpvn9KGTuZJqoz6rihyWK9TVS0dZhjiVSBxW3E69sDLMFqd3QYulAW52n6/N5aiIDyc2WkDZzoBhLUrPSNE0zADQ7Hiu6YuBdBgb0GtNEKfDXSq9Bz+gJFiDVoBHrXIFyc5o4VPPs9C24oCL48NTX6FHfslBLeL1zyuO/92d/X5KalBnfmMTC1V4XLajbEmooXWFbBhHnf2rI28oDSWbeytLNNuJxGhugZHJ+1gThXAz0w9029xZZMW2CIQNzZbLWHlCyMMY0zMDZiGXM3LDdGwmdsflK+aSYVnd5pxgBv2AmMx2BCAbERMSSFGikSn9NHnZdjrsNlnIfud47Ya28YAFYBGdd0sPnt/jiBp7KXEFXrDhwpOrm3aZM8Nn1p9cUmYHfjgzDdaFx8HwO+JC/sdBQbs2bCLtwYHh4BMSBlqPhYVQ0JZ7XxsJh5Rw7qkDb/E=) 2026-01-07 00:20:56.249016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXWOMEuN8qae0K2NsaVI+84V8sOMu/alqED5GD3/d++xpg3Bkd0KZsEbLxc5oHI4LJJtY87vmX/d8dqkZvcriw=) 2026-01-07 00:20:56.249027 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFD13CQ7X8LIMRQrb+XYoVLRJDRBEwWY5+jcsjB7OILn) 2026-01-07 00:20:56.249038 | orchestrator | 2026-01-07 00:20:56.249049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:20:56.249060 | orchestrator | Wednesday 07 January 2026 00:20:55 +0000 (0:00:01.016) 0:00:22.090 ***** 2026-01-07 00:20:56.249071 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDr21SYj3+1yQc1JOpK0e784nRsRSC9Ly1Qa+xkZwCtB) 2026-01-07 00:20:56.249101 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOV8PNKsXAncAz7daF+8gzDLkbmu6LI/AECP2XviZccao6meulhThhFAeS4REcaiJptNOKfrjheySbVLKJAbJZ14rfQ1BVcZTgaYyy4sdNQ15g3D+bPKhf5q8TMn70YIrfFrvBjHDwiq0iT0bdefscU/KBSN1UFaZTs+nijygaDi4JgpYIFJLBec3CZBL3UR1mcPM/pdvo85dp6L3Lm3vS/XSXNbeH7/02MfVEtPxInwYBU4oSdlpOE97mJh4u5KuI787T+uPtP9/FxrbeBxac1uKlDS2qaZT54i9JrJuCjdBRDWiKwhS/mVxlhtKIiGA/c4TnZATL24uhoxAFTFbKG2GoExFjfeRfvObzLz4qFMSYsEcEtymUXUKTN2fhYGT9qGi+s/QN05dq8EFJ/IXSbkQ1Y7TDUZNrYlAwBQMuvSO7DPnFC9k01x0gxZHrEaPjTx58X7MkEA0wAwsLj775f/ppagLvrhK4vJwu0QDKy6tPN8LONZetRzgLzf9eNZ8=) 2026-01-07 00:21:00.354392 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+2Vmfzg8DCbmxSRmVuq9gQwjKbj8AVJ2ja3fldELa8juRBMr2ZTIgPnb/yfY8NWRguJN9AfYoYzjRUrgW2aW0=) 2026-01-07 00:21:00.354502 | orchestrator | 2026-01-07 00:21:00.354520 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:21:00.354535 | orchestrator | Wednesday 07 January 2026 00:20:56 +0000 (0:00:01.001) 0:00:23.092 ***** 2026-01-07 00:21:00.354554 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClx32r+T7icavtVzzmo4mMlvUrI1FHDLiyvRYokpA2D41M5kXGUzdLKwM9zgaw6RXkD5lVH2+TAUT3mPo2LjY3ER5C2FbmviD2WT0K8fj6HCnYAH4Estae4w3NVFYbTcBGJtxJXdxJLtchHVwL5aERS6vJY/vQb6tVe96Klm02wqD5Gw+4HQ/Ukwuel6fZdz+MtUBJIMWc22FwXOAb1RILEvGBkOBx/y09CjH4Dqe1jcCwUdozux+XmQHQVLy6TszeOd3r9c452lwW4f3lwm3/+r/X2UwaDBb8CgYcitoNh3+lHioWGm3j/ppxYOrZmPL9jk/glhuQlSk3Bz+HjsePWBj2u1oXUjdyzCC28r+xSEAcznj9C/Vvyj+QbSuq14g1xQ7PJKabgEIOiglWnnbIlVkTftBzDgf1cF0UWEu0ycnAZACYbFDZ572mE6YW1zpTeBxb9zIbeB5biZ4qg/NUiEYJ5WhF10EWBge82MReVJEjX4qamgg3k2iZdn1fEBM=) 2026-01-07 00:21:00.354569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPl7dkhQ87B8aeBdDBhf7bucPrPP/L8rV92U00dnn11PEnwkWvqnuYUWQSsPshQ/Oq8ybKGsHaP32jVuwDkxfs=) 
2026-01-07 00:21:00.354605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPjSaVs4lIzArHSovvyBotVULCCt9/1lBSJErLTgGCMx) 2026-01-07 00:21:00.354617 | orchestrator | 2026-01-07 00:21:00.354629 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:21:00.354639 | orchestrator | Wednesday 07 January 2026 00:20:57 +0000 (0:00:01.009) 0:00:24.101 ***** 2026-01-07 00:21:00.354682 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqQsY/oAoyJByF+UVxbQ0Ibk80CfnMnkH8vgkDObQCq) 2026-01-07 00:21:00.354696 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBWqCv4+JMZgvzy82WQvq1ZQafMGWVc2415aB1giq2l+0XoWdKwXSSdnxvx/GaV6IJ1BXdB+QJb1YlpnI6ZUaTi+c8VKbToFC+35eRSdetpYqroGkGkQai8NPvyPnTPJQHGUPY6/OI4eTo0koFriMpB+teU5hyIdg+mCpsUHAfSaa7icFazWDk+RBpsgFNobPhsYJrjUNlPZ/uPZyRpa2xBHxY2c0v+yRiH/4bqAWH293k7GSvKWr+Thbc3qEVeH1ctRN/5+5ZX/+ohn29kZ7Z4XEPsWdNW/E/u8v0paGjFrZ0HzJ/Dmpizeb+sShI+No1oUUU6JgTvroefBcunmZBAyuS+zUwIRtZusY35O5YtRP36YcWAQ0dKuFJZcX7pC61PipPeQySyvFPfWjKzpSV9xx4l9pHl+YxYtLZhmOPGzmJZlZ2tej3cjBYddUXck55v7DVvTSkUOuNWEl/smSRIr96R3CbdVvxdoDPDAvwbM4T8al7zOPdYlw6Dd6vc6s=) 2026-01-07 00:21:00.354708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOzlVctA/xIF+6sslTPMCW6oZYHWjGRrrprmn9pcTQLpl0l99BGAXvysUhr865jjhj5ZY3WG3AKh1JAS9pCW3J0=) 2026-01-07 00:21:00.354719 | orchestrator | 2026-01-07 00:21:00.354730 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:21:00.354748 | orchestrator | Wednesday 07 January 2026 00:20:58 +0000 (0:00:00.979) 0:00:25.081 ***** 2026-01-07 00:21:00.354767 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDqN1OsJcUZi3WIXmnqkkAqWbVfH5hFLetVcSFJdB1AL+NkyJzbWGG2TkQUFMf81lIij5pRoXWD9tiPjNorX4j2Bw59A3qlqsKU40ppQN7tSknhzmo/8O5MMtWn4TxVD0lnJuFLArQ9An52s+ZPTj7XX/YApJuy95iDHmvIYMAKlkRPhsVFzDz4HaDezHbAAURjDjYVn2pPq4njBBK44H54UIDFHmW7oOI7RMsylhSuEl8FOs7bZeDYtj5Qo3osRafmUUhW2W8EtgecLGwfFgVGOBDxrLykfaRt9m/NYBzoZZrNfEbXOB24fusoES6WzQWEzn8nKh+ilC2Zce892C33TgwK/jtVnCOdcsR82gOsiVgs1pcrRIiixm22PHlE+Us5Dd1vCgW+GEBAFdxacmFJy5NCZYNwB7DF80Vw9tJTzxXT+ScuZxPU5Mue2qZq2k+UiLEI8ZFcouwLD40S2t2uD9F7KYehqsL8GKOYjuZQVqGZFgegeOZAth5uYQSbEeU=) 2026-01-07 00:21:00.354786 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFpjXDVkyoaLCsSthTCgULynLJU7NEcJjQBO96XZO+NgYamVZITtzRbRBWlR4A2tKk9e/BdRiOZtJVW0c1o5p/Y=) 2026-01-07 00:21:00.354806 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA9akWZs4kpvRr6POn959nBx6u9HYP6hsmSCzN0GTMCC) 2026-01-07 00:21:00.354826 | orchestrator | 2026-01-07 00:21:00.354839 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-07 00:21:00.354850 | orchestrator | Wednesday 07 January 2026 00:20:59 +0000 (0:00:00.974) 0:00:26.056 ***** 2026-01-07 00:21:00.354862 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-07 00:21:00.354873 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-07 00:21:00.354884 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-07 00:21:00.354894 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-07 00:21:00.354925 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-07 00:21:00.354939 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-07 00:21:00.354952 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-07 00:21:00.354965 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:21:00.354978 | orchestrator | 2026-01-07 00:21:00.354991 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-07 00:21:00.355015 | orchestrator | Wednesday 07 January 2026 00:20:59 +0000 (0:00:00.153) 0:00:26.209 ***** 2026-01-07 00:21:00.355028 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:21:00.355040 | orchestrator | 2026-01-07 00:21:00.355053 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-07 00:21:00.355067 | orchestrator | Wednesday 07 January 2026 00:20:59 +0000 (0:00:00.054) 0:00:26.264 ***** 2026-01-07 00:21:00.355081 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:21:00.355093 | orchestrator | 2026-01-07 00:21:00.355106 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-07 00:21:00.355117 | orchestrator | Wednesday 07 January 2026 00:20:59 +0000 (0:00:00.053) 0:00:26.318 ***** 2026-01-07 00:21:00.355130 | orchestrator | changed: [testbed-manager] 2026-01-07 00:21:00.355143 | orchestrator | 2026-01-07 00:21:00.355156 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:21:00.355169 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:21:00.355183 | orchestrator | 2026-01-07 00:21:00.355197 | orchestrator | 2026-01-07 00:21:00.355210 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:21:00.355222 | orchestrator | Wednesday 07 January 2026 00:21:00 +0000 (0:00:00.691) 0:00:27.009 ***** 2026-01-07 00:21:00.355234 | orchestrator | =============================================================================== 2026-01-07 00:21:00.355247 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.61s 2026-01-07 
00:21:00.355260 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.14s 2026-01-07 00:21:00.355272 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.00s 2026-01-07 00:21:00.355283 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-07 00:21:00.355293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-07 00:21:00.355304 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-07 00:21:00.355315 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-07 00:21:00.355326 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-07 00:21:00.355336 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-07 00:21:00.355347 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-07 00:21:00.355358 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-07 00:21:00.355368 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-01-07 00:21:00.355379 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-01-07 00:21:00.355390 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-01-07 00:21:00.355400 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-01-07 00:21:00.355411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-01-07 00:21:00.355429 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2026-01-07 
00:21:00.355440 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-07 00:21:00.355451 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-01-07 00:21:00.355462 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2026-01-07 00:21:00.659715 | orchestrator | + osism apply squid 2026-01-07 00:21:12.604598 | orchestrator | 2026-01-07 00:21:12 | INFO  | Task 3572c42c-bc2e-4356-9b84-127704ba50c6 (squid) was prepared for execution. 2026-01-07 00:21:12.604738 | orchestrator | 2026-01-07 00:21:12 | INFO  | It takes a moment until task 3572c42c-bc2e-4356-9b84-127704ba50c6 (squid) has been started and output is visible here. 2026-01-07 00:23:10.045559 | orchestrator | 2026-01-07 00:23:10.045747 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-07 00:23:10.045769 | orchestrator | 2026-01-07 00:23:10.045780 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-07 00:23:10.045792 | orchestrator | Wednesday 07 January 2026 00:21:16 +0000 (0:00:00.139) 0:00:00.139 ***** 2026-01-07 00:23:10.045803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:23:10.045816 | orchestrator | 2026-01-07 00:23:10.045827 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-07 00:23:10.045838 | orchestrator | Wednesday 07 January 2026 00:21:16 +0000 (0:00:00.069) 0:00:00.209 ***** 2026-01-07 00:23:10.045849 | orchestrator | ok: [testbed-manager] 2026-01-07 00:23:10.045861 | orchestrator | 2026-01-07 00:23:10.045872 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-07 
00:23:10.045886 | orchestrator | Wednesday 07 January 2026 00:21:17 +0000 (0:00:01.108) 0:00:01.317 ***** 2026-01-07 00:23:10.045907 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-07 00:23:10.045925 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-07 00:23:10.045944 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-07 00:23:10.045962 | orchestrator | 2026-01-07 00:23:10.045980 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-07 00:23:10.045999 | orchestrator | Wednesday 07 January 2026 00:21:18 +0000 (0:00:00.986) 0:00:02.303 ***** 2026-01-07 00:23:10.046099 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-07 00:23:10.046124 | orchestrator | 2026-01-07 00:23:10.046143 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-07 00:23:10.046191 | orchestrator | Wednesday 07 January 2026 00:21:19 +0000 (0:00:00.933) 0:00:03.237 ***** 2026-01-07 00:23:10.046215 | orchestrator | ok: [testbed-manager] 2026-01-07 00:23:10.046235 | orchestrator | 2026-01-07 00:23:10.046254 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-07 00:23:10.046275 | orchestrator | Wednesday 07 January 2026 00:21:19 +0000 (0:00:00.304) 0:00:03.541 ***** 2026-01-07 00:23:10.046294 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:10.046309 | orchestrator | 2026-01-07 00:23:10.046322 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-07 00:23:10.046335 | orchestrator | Wednesday 07 January 2026 00:21:20 +0000 (0:00:00.808) 0:00:04.350 ***** 2026-01-07 00:23:10.046348 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-07 00:23:10.046361 | orchestrator | ok: [testbed-manager] 2026-01-07 00:23:10.046374 | orchestrator | 2026-01-07 00:23:10.046386 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-07 00:23:10.046400 | orchestrator | Wednesday 07 January 2026 00:21:57 +0000 (0:00:36.490) 0:00:40.841 ***** 2026-01-07 00:23:10.046413 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:10.046426 | orchestrator | 2026-01-07 00:23:10.046439 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-07 00:23:10.046453 | orchestrator | Wednesday 07 January 2026 00:22:09 +0000 (0:00:11.948) 0:00:52.789 ***** 2026-01-07 00:23:10.046464 | orchestrator | Pausing for 60 seconds 2026-01-07 00:23:10.046475 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:10.046486 | orchestrator | 2026-01-07 00:23:10.046497 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-07 00:23:10.046508 | orchestrator | Wednesday 07 January 2026 00:23:09 +0000 (0:01:00.073) 0:01:52.863 ***** 2026-01-07 00:23:10.046518 | orchestrator | ok: [testbed-manager] 2026-01-07 00:23:10.046529 | orchestrator | 2026-01-07 00:23:10.046540 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-07 00:23:10.046551 | orchestrator | Wednesday 07 January 2026 00:23:09 +0000 (0:00:00.061) 0:01:52.924 ***** 2026-01-07 00:23:10.046589 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:10.046600 | orchestrator | 2026-01-07 00:23:10.046611 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:23:10.046622 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:23:10.046633 | orchestrator | 2026-01-07 00:23:10.046675 | orchestrator | 2026-01-07 00:23:10.046689 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-07 00:23:10.046700 | orchestrator | Wednesday 07 January 2026 00:23:09 +0000 (0:00:00.556) 0:01:53.480 ***** 2026-01-07 00:23:10.046711 | orchestrator | =============================================================================== 2026-01-07 00:23:10.046721 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-01-07 00:23:10.046732 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.49s 2026-01-07 00:23:10.046743 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.95s 2026-01-07 00:23:10.046753 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.11s 2026-01-07 00:23:10.046764 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.99s 2026-01-07 00:23:10.046774 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s 2026-01-07 00:23:10.046784 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s 2026-01-07 00:23:10.046795 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s 2026-01-07 00:23:10.046805 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.30s 2026-01-07 00:23:10.046816 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-01-07 00:23:10.046826 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-01-07 00:23:10.293205 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-07 00:23:10.293304 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-07 00:23:10.299784 | orchestrator | + set -e 2026-01-07 00:23:10.299867 | orchestrator | + NAMESPACE=kolla 2026-01-07 
00:23:10.299892 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-07 00:23:10.303029 | orchestrator | ++ semver latest 9.0.0 2026-01-07 00:23:10.349783 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-07 00:23:10.349905 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-07 00:23:10.350451 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-07 00:23:22.383137 | orchestrator | 2026-01-07 00:23:22 | INFO  | Task 15965f36-106a-4c79-8596-2d2417ee105c (operator) was prepared for execution. 2026-01-07 00:23:22.383255 | orchestrator | 2026-01-07 00:23:22 | INFO  | It takes a moment until task 15965f36-106a-4c79-8596-2d2417ee105c (operator) has been started and output is visible here. 2026-01-07 00:23:37.840986 | orchestrator | 2026-01-07 00:23:37.841118 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-07 00:23:37.841134 | orchestrator | 2026-01-07 00:23:37.841145 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:23:37.841157 | orchestrator | Wednesday 07 January 2026 00:23:26 +0000 (0:00:00.135) 0:00:00.135 ***** 2026-01-07 00:23:37.841168 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:23:37.841182 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:23:37.841193 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:23:37.841204 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:23:37.841215 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:23:37.841225 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:23:37.841241 | orchestrator | 2026-01-07 00:23:37.841253 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-07 00:23:37.841264 | orchestrator | Wednesday 07 January 2026 00:23:29 +0000 (0:00:03.395) 0:00:03.530 ***** 2026-01-07 00:23:37.841275 | orchestrator | ok: [testbed-node-2] 
2026-01-07 00:23:37.841307 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:23:37.841319 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:23:37.841330 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:23:37.841341 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:23:37.841352 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:23:37.841362 | orchestrator |
2026-01-07 00:23:37.841373 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-07 00:23:37.841384 | orchestrator |
2026-01-07 00:23:37.841395 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-07 00:23:37.841407 | orchestrator | Wednesday 07 January 2026 00:23:30 +0000 (0:00:00.655) 0:00:04.185 *****
2026-01-07 00:23:37.841417 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:23:37.841428 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:23:37.841439 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:23:37.841450 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:23:37.841461 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:23:37.841471 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:23:37.841482 | orchestrator |
2026-01-07 00:23:37.841493 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-07 00:23:37.841504 | orchestrator | Wednesday 07 January 2026 00:23:30 +0000 (0:00:00.122) 0:00:04.308 *****
2026-01-07 00:23:37.841515 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:23:37.841525 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:23:37.841536 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:23:37.841547 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:23:37.841558 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:23:37.841568 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:23:37.841582 | orchestrator |
2026-01-07 00:23:37.841600 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-07 00:23:37.841619 | orchestrator | Wednesday 07 January 2026 00:23:30 +0000 (0:00:00.124) 0:00:04.432 *****
2026-01-07 00:23:37.841637 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:37.841687 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:37.841706 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:37.841724 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:37.841742 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:37.841759 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:37.841778 | orchestrator |
2026-01-07 00:23:37.841810 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-07 00:23:37.841828 | orchestrator | Wednesday 07 January 2026 00:23:31 +0000 (0:00:00.632) 0:00:05.065 *****
2026-01-07 00:23:37.841846 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:37.841863 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:37.841882 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:37.841898 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:37.841915 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:37.841931 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:37.841946 | orchestrator |
2026-01-07 00:23:37.841963 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-07 00:23:37.841978 | orchestrator | Wednesday 07 January 2026 00:23:32 +0000 (0:00:00.767) 0:00:05.832 *****
2026-01-07 00:23:37.841996 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-07 00:23:37.842103 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-07 00:23:37.842130 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-07 00:23:37.842149 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-07 00:23:37.842170 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-07 00:23:37.842191 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-07 00:23:37.842210 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-07 00:23:37.842230 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-07 00:23:37.842249 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-07 00:23:37.842268 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-07 00:23:37.842288 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-07 00:23:37.842327 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-07 00:23:37.842347 | orchestrator |
2026-01-07 00:23:37.842366 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-07 00:23:37.842386 | orchestrator | Wednesday 07 January 2026 00:23:33 +0000 (0:00:01.175) 0:00:07.007 *****
2026-01-07 00:23:37.842406 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:37.842425 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:37.842445 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:37.842464 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:37.842482 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:37.842501 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:37.842519 | orchestrator |
2026-01-07 00:23:37.842537 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-07 00:23:37.842558 | orchestrator | Wednesday 07 January 2026 00:23:34 +0000 (0:00:01.200) 0:00:08.208 *****
2026-01-07 00:23:37.842579 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-07 00:23:37.842598 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-07 00:23:37.842617 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-07 00:23:37.842637 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842714 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842735 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842753 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842772 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842790 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:23:37.842809 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842827 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842845 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842863 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842882 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842910 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-07 00:23:37.842928 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.842946 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.842962 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.842982 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.843000 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.843018 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:23:37.843036 | orchestrator |
2026-01-07 00:23:37.843053 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-07 00:23:37.843072 | orchestrator | Wednesday 07 January 2026 00:23:35 +0000 (0:00:01.394) 0:00:09.603 *****
2026-01-07 00:23:37.843090 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:37.843108 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:37.843125 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:37.843143 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:37.843161 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:37.843178 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:37.843196 | orchestrator |
2026-01-07 00:23:37.843214 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-07 00:23:37.843231 | orchestrator | Wednesday 07 January 2026 00:23:35 +0000 (0:00:00.134) 0:00:09.737 *****
2026-01-07 00:23:37.843249 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:37.843281 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:37.843300 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:37.843318 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:37.843335 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:37.843353 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:37.843371 | orchestrator |
2026-01-07 00:23:37.843389 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-07 00:23:37.843407 | orchestrator | Wednesday 07 January 2026 00:23:36 +0000 (0:00:00.157) 0:00:09.895 *****
2026-01-07 00:23:37.843425 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:37.843443 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:37.843461 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:37.843478 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:37.843495 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:37.843512 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:37.843530 | orchestrator |
2026-01-07 00:23:37.843548 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-07 00:23:37.843566 | orchestrator | Wednesday 07 January 2026 00:23:36 +0000 (0:00:00.555) 0:00:10.450 *****
2026-01-07 00:23:37.843583 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:37.843601 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:37.843619 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:37.843637 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:37.843681 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:37.843700 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:37.843718 | orchestrator |
2026-01-07 00:23:37.843736 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-07 00:23:37.843755 | orchestrator | Wednesday 07 January 2026 00:23:36 +0000 (0:00:00.146) 0:00:10.597 *****
2026-01-07 00:23:37.843773 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 00:23:37.843791 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:37.843809 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:23:37.843827 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-07 00:23:37.843845 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:37.843863 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:37.843880 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-07 00:23:37.843898 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 00:23:37.843916 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:37.843933 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:37.843951 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 00:23:37.843969 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:37.843987 | orchestrator |
2026-01-07 00:23:37.844005 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-07 00:23:37.844023 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:00.746) 0:00:11.344 *****
2026-01-07 00:23:37.844041 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:37.844058 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:37.844076 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:37.844094 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:37.844112 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:37.844129 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:37.844147 | orchestrator |
2026-01-07 00:23:37.844164 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-07 00:23:37.844182 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:00.140) 0:00:11.485 *****
2026-01-07 00:23:37.844199 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:37.844215 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:37.844231 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:37.844248 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:37.844282 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:39.062308 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:39.062545 | orchestrator |
2026-01-07 00:23:39.062579 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-07 00:23:39.062603 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:00.127) 0:00:11.612 *****
2026-01-07 00:23:39.062624 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:39.062677 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:39.062698 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:39.062715 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:39.062734 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:39.062753 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:39.062771 | orchestrator |
2026-01-07 00:23:39.062790 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-07 00:23:39.062808 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:00.126) 0:00:11.738 *****
2026-01-07 00:23:39.062827 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:23:39.062845 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:23:39.062865 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:23:39.062884 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:23:39.062903 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:23:39.062921 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:23:39.062940 | orchestrator |
2026-01-07 00:23:39.062959 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-07 00:23:39.062978 | orchestrator | Wednesday 07 January 2026 00:23:38 +0000 (0:00:00.662) 0:00:12.400 *****
2026-01-07 00:23:39.062996 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:23:39.063015 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:23:39.063033 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:23:39.063052 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:23:39.063070 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:23:39.063089 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:23:39.063106 | orchestrator |
2026-01-07 00:23:39.063126 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:23:39.063147 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063168 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063187 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063206 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063253 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063273 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:23:39.063291 | orchestrator |
2026-01-07 00:23:39.063310 | orchestrator |
2026-01-07 00:23:39.063328 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:23:39.063347 | orchestrator | Wednesday 07 January 2026 00:23:38 +0000 (0:00:00.210) 0:00:12.611 *****
2026-01-07 00:23:39.063366 | orchestrator | ===============================================================================
2026-01-07 00:23:39.063385 | orchestrator | Gathering Facts --------------------------------------------------------- 3.40s
2026-01-07 00:23:39.063404 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2026-01-07 00:23:39.063423 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s
2026-01-07 00:23:39.063442 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2026-01-07 00:23:39.063460 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2026-01-07 00:23:39.063491 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2026-01-07 00:23:39.063510 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-01-07 00:23:39.063528 | orchestrator | Do not require tty for all users ---------------------------------------- 0.66s
2026-01-07 00:23:39.063547 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-01-07 00:23:39.063565 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2026-01-07 00:23:39.063583 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-01-07 00:23:39.063607 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-01-07 00:23:39.063626 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-01-07 00:23:39.063666 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-01-07 00:23:39.063686 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s
2026-01-07 00:23:39.063705 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-01-07 00:23:39.063723 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-01-07 00:23:39.063742 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.12s
2026-01-07 00:23:39.063761 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.12s
2026-01-07 00:23:39.323927 | orchestrator | + osism apply --environment custom facts
2026-01-07 00:23:41.182300 | orchestrator | 2026-01-07 00:23:41 | INFO  | Trying to run play facts in environment custom
2026-01-07 00:23:51.275061 | orchestrator | 2026-01-07 00:23:51 | INFO  | Task 08280c9c-a238-4e20-ad73-d231c13927a8 (facts) was prepared for execution.
2026-01-07 00:23:51.275156 | orchestrator | 2026-01-07 00:23:51 | INFO  | It takes a moment until task 08280c9c-a238-4e20-ad73-d231c13927a8 (facts) has been started and output is visible here.
2026-01-07 00:24:33.884958 | orchestrator |
2026-01-07 00:24:33.885108 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-07 00:24:33.885128 | orchestrator |
2026-01-07 00:24:33.885141 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:24:33.885169 | orchestrator | Wednesday 07 January 2026 00:23:55 +0000 (0:00:00.059) 0:00:00.059 *****
2026-01-07 00:24:33.885181 | orchestrator | ok: [testbed-manager]
2026-01-07 00:24:33.885194 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:24:33.885206 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.885217 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:24:33.885228 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:24:33.885239 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.885249 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.885260 | orchestrator |
2026-01-07 00:24:33.885271 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-07 00:24:33.885282 | orchestrator | Wednesday 07 January 2026 00:23:56 +0000 (0:00:01.312) 0:00:01.371 *****
2026-01-07 00:24:33.885293 | orchestrator | ok: [testbed-manager]
2026-01-07 00:24:33.885304 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.885314 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.885325 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.885336 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:24:33.885348 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:24:33.885359 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:24:33.885369 | orchestrator |
2026-01-07 00:24:33.885381 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-07 00:24:33.885392 | orchestrator |
2026-01-07 00:24:33.885403 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-07 00:24:33.885413 | orchestrator | Wednesday 07 January 2026 00:23:57 +0000 (0:00:01.087) 0:00:02.458 *****
2026-01-07 00:24:33.885447 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.885458 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.885469 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.885480 | orchestrator |
2026-01-07 00:24:33.885491 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-07 00:24:33.885505 | orchestrator | Wednesday 07 January 2026 00:23:57 +0000 (0:00:00.067) 0:00:02.526 *****
2026-01-07 00:24:33.885518 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.885531 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.885543 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.885555 | orchestrator |
2026-01-07 00:24:33.885567 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-07 00:24:33.885579 | orchestrator | Wednesday 07 January 2026 00:23:57 +0000 (0:00:00.168) 0:00:02.694 *****
2026-01-07 00:24:33.885591 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.885603 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.885616 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.885628 | orchestrator |
2026-01-07 00:24:33.885661 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-07 00:24:33.885674 | orchestrator | Wednesday 07 January 2026 00:23:57 +0000 (0:00:00.170) 0:00:02.865 *****
2026-01-07 00:24:33.885688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:24:33.885701 | orchestrator |
2026-01-07 00:24:33.885714 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-07 00:24:33.885726 | orchestrator | Wednesday 07 January 2026 00:23:58 +0000 (0:00:00.117) 0:00:02.983 *****
2026-01-07 00:24:33.885738 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.885752 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.885764 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.885777 | orchestrator |
2026-01-07 00:24:33.885789 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-07 00:24:33.885802 | orchestrator | Wednesday 07 January 2026 00:23:58 +0000 (0:00:00.413) 0:00:03.397 *****
2026-01-07 00:24:33.885815 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:24:33.885829 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:24:33.885842 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:24:33.885853 | orchestrator |
2026-01-07 00:24:33.885864 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-07 00:24:33.885875 | orchestrator | Wednesday 07 January 2026 00:23:58 +0000 (0:00:00.102) 0:00:03.499 *****
2026-01-07 00:24:33.885886 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.885897 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.885907 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.885918 | orchestrator |
2026-01-07 00:24:33.885929 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-07 00:24:33.885940 | orchestrator | Wednesday 07 January 2026 00:23:59 +0000 (0:00:00.977) 0:00:04.476 *****
2026-01-07 00:24:33.885951 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.885962 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.885972 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.885983 | orchestrator |
2026-01-07 00:24:33.885994 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-07 00:24:33.886005 | orchestrator | Wednesday 07 January 2026 00:24:00 +0000 (0:00:00.436) 0:00:04.913 *****
2026-01-07 00:24:33.886085 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.886099 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.886110 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.886121 | orchestrator |
2026-01-07 00:24:33.886132 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-07 00:24:33.886143 | orchestrator | Wednesday 07 January 2026 00:24:00 +0000 (0:00:00.993) 0:00:05.907 *****
2026-01-07 00:24:33.886154 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.886215 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.886227 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.886237 | orchestrator |
2026-01-07 00:24:33.886248 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-07 00:24:33.886259 | orchestrator | Wednesday 07 January 2026 00:24:16 +0000 (0:00:15.603) 0:00:21.510 *****
2026-01-07 00:24:33.886270 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:24:33.886281 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:24:33.886292 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:24:33.886303 | orchestrator |
2026-01-07 00:24:33.886314 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-07 00:24:33.886343 | orchestrator | Wednesday 07 January 2026 00:24:16 +0000 (0:00:00.103) 0:00:21.614 *****
2026-01-07 00:24:33.886355 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:24:33.886366 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:24:33.886377 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:24:33.886388 | orchestrator |
2026-01-07 00:24:33.886399 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:24:33.886410 | orchestrator | Wednesday 07 January 2026 00:24:24 +0000 (0:00:07.350) 0:00:28.964 *****
2026-01-07 00:24:33.886420 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.886431 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.886442 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.886453 | orchestrator |
2026-01-07 00:24:33.886464 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-07 00:24:33.886474 | orchestrator | Wednesday 07 January 2026 00:24:24 +0000 (0:00:00.476) 0:00:29.440 *****
2026-01-07 00:24:33.886485 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-07 00:24:33.886497 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-07 00:24:33.886508 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-07 00:24:33.886519 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-07 00:24:33.886529 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-07 00:24:33.886540 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-07 00:24:33.886551 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-07 00:24:33.886562 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-07 00:24:33.886572 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-07 00:24:33.886583 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:24:33.886594 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:24:33.886605 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:24:33.886615 | orchestrator |
2026-01-07 00:24:33.886626 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:24:33.886682 | orchestrator | Wednesday 07 January 2026 00:24:28 +0000 (0:00:03.499) 0:00:32.940 *****
2026-01-07 00:24:33.886693 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.886704 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.886715 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.886725 | orchestrator |
2026-01-07 00:24:33.886736 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:24:33.886747 | orchestrator |
2026-01-07 00:24:33.886758 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:24:33.886769 | orchestrator | Wednesday 07 January 2026 00:24:29 +0000 (0:00:01.290) 0:00:34.231 *****
2026-01-07 00:24:33.886780 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:24:33.886790 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:24:33.886801 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:24:33.886811 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:24:33.886822 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:24:33.886833 | orchestrator | ok: [testbed-manager]
2026-01-07 00:24:33.886851 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:24:33.886862 | orchestrator |
2026-01-07 00:24:33.886873 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:24:33.886885 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:24:33.886896 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:24:33.886909 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:24:33.886961 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:24:33.886974 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:24:33.886985 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:24:33.886996 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:24:33.887006 | orchestrator |
2026-01-07 00:24:33.887017 | orchestrator |
2026-01-07 00:24:33.887028 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:24:33.887039 | orchestrator | Wednesday 07 January 2026 00:24:33 +0000 (0:00:04.541) 0:00:38.773 *****
2026-01-07 00:24:33.887050 | orchestrator | ===============================================================================
2026-01-07 00:24:33.887061 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.60s
2026-01-07 00:24:33.887072 | orchestrator | Install required packages (Debian) -------------------------------------- 7.35s
2026-01-07 00:24:33.887083 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s
2026-01-07 00:24:33.887093 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2026-01-07 00:24:33.887104 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s
2026-01-07 00:24:33.887115 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s
2026-01-07 00:24:33.887133 | orchestrator | Copy fact file ---------------------------------------------------------- 1.09s
2026-01-07 00:24:34.081118 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.99s
2026-01-07 00:24:34.081220 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2026-01-07 00:24:34.081255 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-01-07 00:24:34.081268 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-01-07 00:24:34.081279 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-01-07 00:24:34.081290 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2026-01-07 00:24:34.081301 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-01-07 00:24:34.081312 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-01-07 00:24:34.081324 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-07 00:24:34.081335 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-01-07 00:24:34.081346 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-01-07 00:24:34.342087 | orchestrator | + osism apply bootstrap
2026-01-07 00:24:46.362396 | orchestrator | 2026-01-07 00:24:46 | INFO  | Task b49843bb-b884-4fb1-865f-85596a52ef67 (bootstrap) was prepared for execution.
2026-01-07 00:24:46.362581 | orchestrator | 2026-01-07 00:24:46 | INFO  | It takes a moment until task b49843bb-b884-4fb1-865f-85596a52ef67 (bootstrap) has been started and output is visible here.
2026-01-07 00:25:01.943759 | orchestrator |
2026-01-07 00:25:01.943898 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-07 00:25:01.943912 | orchestrator |
2026-01-07 00:25:01.943921 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-07 00:25:01.943929 | orchestrator | Wednesday 07 January 2026 00:24:50 +0000 (0:00:00.109) 0:00:00.109 *****
2026-01-07 00:25:01.943936 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:01.943945 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:01.943952 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:01.943959 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:01.943967 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:01.943974 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:01.943981 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:01.943988 | orchestrator |
2026-01-07 00:25:01.943996 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:25:01.944003 | orchestrator |
2026-01-07 00:25:01.944009 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:25:01.944016 | orchestrator | Wednesday 07 January 2026 00:24:50 +0000 (0:00:00.155) 0:00:00.265 *****
2026-01-07 00:25:01.944022 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:01.944030 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:01.944037 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:01.944045 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:01.944052 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:01.944058 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:01.944065 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:01.944073 | orchestrator |
2026-01-07 00:25:01.944080 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-07 00:25:01.944088 | orchestrator |
2026-01-07 00:25:01.944095 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:25:01.944103 | orchestrator | Wednesday 07 January 2026 00:24:53 +0000 (0:00:03.518) 0:00:03.783 *****
2026-01-07 00:25:01.944111 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-07 00:25:01.944119 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-07 00:25:01.944126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-07 00:25:01.944133 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-07 00:25:01.944141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:25:01.944148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:25:01.944155 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-07 00:25:01.944163 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-07 00:25:01.944169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:25:01.944176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-07 00:25:01.944184 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-07 00:25:01.944191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 00:25:01.944198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-07 00:25:01.944205 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-07 00:25:01.944211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-07 00:25:01.944218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-07 00:25:01.944225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 00:25:01.944232 | orchestrator | skipping:
[testbed-manager] => (item=testbed-node-1)  2026-01-07 00:25:01.944240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-07 00:25:01.944246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-07 00:25:01.944298 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-07 00:25:01.944307 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:01.944314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-07 00:25:01.944322 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-07 00:25:01.944329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:25:01.944336 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:01.944343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-07 00:25:01.944351 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 00:25:01.944359 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-01-07 00:25:01.944384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-07 00:25:01.944392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 00:25:01.944399 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:01.944406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-07 00:25:01.944413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-07 00:25:01.944420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:25:01.944427 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-07 00:25:01.944434 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 00:25:01.944440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-07 00:25:01.944448 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2026-01-07 00:25:01.944454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 00:25:01.944461 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-07 00:25:01.944467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-07 00:25:01.944473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:25:01.944480 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:01.944487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 00:25:01.944493 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:25:01.944499 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-07 00:25:01.944526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-07 00:25:01.944532 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-07 00:25:01.944539 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-07 00:25:01.944545 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-07 00:25:01.944552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-07 00:25:01.944558 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:01.944568 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-07 00:25:01.944575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-07 00:25:01.944582 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:01.944588 | orchestrator | 2026-01-07 00:25:01.944595 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-07 00:25:01.944602 | orchestrator | 2026-01-07 00:25:01.944608 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-07 00:25:01.944615 | orchestrator | Wednesday 07 January 2026 00:24:54 +0000 
(0:00:00.385) 0:00:04.169 ***** 2026-01-07 00:25:01.944621 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:01.944628 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:01.944674 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:01.944680 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:01.944685 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:01.944691 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:01.944697 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:01.944702 | orchestrator | 2026-01-07 00:25:01.944709 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-07 00:25:01.944727 | orchestrator | Wednesday 07 January 2026 00:24:56 +0000 (0:00:02.042) 0:00:06.211 ***** 2026-01-07 00:25:01.944733 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:01.944740 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:01.944746 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:01.944752 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:01.944758 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:01.944765 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:01.944771 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:01.944778 | orchestrator | 2026-01-07 00:25:01.944784 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-07 00:25:01.944791 | orchestrator | Wednesday 07 January 2026 00:24:57 +0000 (0:00:01.198) 0:00:07.409 ***** 2026-01-07 00:25:01.944798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:01.944807 | orchestrator | 2026-01-07 00:25:01.944814 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-07 00:25:01.944821 | orchestrator | 
Wednesday 07 January 2026 00:24:57 +0000 (0:00:00.238) 0:00:07.648 ***** 2026-01-07 00:25:01.944827 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:01.944833 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:01.944840 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:01.944846 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:01.944852 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:01.944857 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:01.944863 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:01.944868 | orchestrator | 2026-01-07 00:25:01.944874 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-07 00:25:01.944881 | orchestrator | Wednesday 07 January 2026 00:24:59 +0000 (0:00:01.964) 0:00:09.613 ***** 2026-01-07 00:25:01.944887 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:01.944896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:01.944905 | orchestrator | 2026-01-07 00:25:01.944912 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-07 00:25:01.944918 | orchestrator | Wednesday 07 January 2026 00:24:59 +0000 (0:00:00.228) 0:00:09.841 ***** 2026-01-07 00:25:01.944924 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:01.944931 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:01.944937 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:01.944943 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:01.944949 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:01.944955 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:01.944960 | orchestrator | 2026-01-07 00:25:01.944968 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-07 00:25:01.944972 | orchestrator | Wednesday 07 January 2026 00:25:00 +0000 (0:00:00.956) 0:00:10.798 ***** 2026-01-07 00:25:01.944976 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:01.944980 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:01.944984 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:01.944988 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:01.944991 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:01.944995 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:01.944999 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:01.945003 | orchestrator | 2026-01-07 00:25:01.945007 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-07 00:25:01.945011 | orchestrator | Wednesday 07 January 2026 00:25:01 +0000 (0:00:00.604) 0:00:11.403 ***** 2026-01-07 00:25:01.945015 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:01.945022 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:01.945035 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:25:01.945042 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:01.945047 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:01.945054 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:01.945070 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:01.945077 | orchestrator | 2026-01-07 00:25:01.945084 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-07 00:25:01.945092 | orchestrator | Wednesday 07 January 2026 00:25:01 +0000 (0:00:00.454) 0:00:11.857 ***** 2026-01-07 00:25:01.945099 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:01.945105 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:01.945122 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:13.321437 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:25:13.321562 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:13.321578 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:13.321589 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:13.321601 | orchestrator | 2026-01-07 00:25:13.321616 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-07 00:25:13.321720 | orchestrator | Wednesday 07 January 2026 00:25:02 +0000 (0:00:00.205) 0:00:12.063 ***** 2026-01-07 00:25:13.321739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:13.321779 | orchestrator | 2026-01-07 00:25:13.321796 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-07 00:25:13.321808 | orchestrator | Wednesday 07 January 2026 00:25:02 +0000 (0:00:00.265) 0:00:12.329 ***** 2026-01-07 00:25:13.321820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:13.321831 | orchestrator | 2026-01-07 00:25:13.321849 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-07 00:25:13.321868 | orchestrator | Wednesday 07 January 2026 00:25:02 +0000 (0:00:00.263) 0:00:12.593 ***** 2026-01-07 00:25:13.321887 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.321907 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.321919 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.321929 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.321940 | orchestrator | ok: [testbed-node-4] 2026-01-07 
00:25:13.321957 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.321977 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.321996 | orchestrator | 2026-01-07 00:25:13.322088 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-07 00:25:13.322105 | orchestrator | Wednesday 07 January 2026 00:25:03 +0000 (0:00:01.321) 0:00:13.914 ***** 2026-01-07 00:25:13.322117 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:13.322129 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:13.322151 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:13.322172 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:25:13.322192 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:13.322212 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:13.322227 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:13.322239 | orchestrator | 2026-01-07 00:25:13.322250 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-07 00:25:13.322262 | orchestrator | Wednesday 07 January 2026 00:25:04 +0000 (0:00:00.217) 0:00:14.132 ***** 2026-01-07 00:25:13.322278 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.322298 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.322316 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.322334 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.322345 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.322380 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.322391 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.322402 | orchestrator | 2026-01-07 00:25:13.322413 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-07 00:25:13.322424 | orchestrator | Wednesday 07 January 2026 00:25:04 +0000 (0:00:00.534) 0:00:14.666 ***** 2026-01-07 00:25:13.322435 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 00:25:13.322446 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:13.322456 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:13.322467 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:25:13.322477 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:13.322488 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:13.322499 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:13.322509 | orchestrator | 2026-01-07 00:25:13.322521 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-07 00:25:13.322533 | orchestrator | Wednesday 07 January 2026 00:25:04 +0000 (0:00:00.298) 0:00:14.964 ***** 2026-01-07 00:25:13.322544 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.322555 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:13.322565 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:13.322576 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:13.322587 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:13.322606 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:13.322617 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:13.322651 | orchestrator | 2026-01-07 00:25:13.322664 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-07 00:25:13.322675 | orchestrator | Wednesday 07 January 2026 00:25:05 +0000 (0:00:00.520) 0:00:15.485 ***** 2026-01-07 00:25:13.322687 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.322697 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:13.322708 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:13.322719 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:13.322729 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:13.322740 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:13.322750 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 00:25:13.322761 | orchestrator | 2026-01-07 00:25:13.322772 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-07 00:25:13.322783 | orchestrator | Wednesday 07 January 2026 00:25:06 +0000 (0:00:01.048) 0:00:16.534 ***** 2026-01-07 00:25:13.322793 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.322804 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.322815 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.322826 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.322837 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.322848 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.322858 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.322869 | orchestrator | 2026-01-07 00:25:13.322880 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-07 00:25:13.322891 | orchestrator | Wednesday 07 January 2026 00:25:07 +0000 (0:00:01.002) 0:00:17.536 ***** 2026-01-07 00:25:13.322924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:13.322936 | orchestrator | 2026-01-07 00:25:13.322947 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-07 00:25:13.322958 | orchestrator | Wednesday 07 January 2026 00:25:07 +0000 (0:00:00.273) 0:00:17.809 ***** 2026-01-07 00:25:13.322968 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:25:13.322979 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:25:13.322990 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:25:13.323001 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:25:13.323011 | orchestrator | changed: [testbed-node-2] 2026-01-07 
00:25:13.323022 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:13.323045 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:13.323056 | orchestrator | 2026-01-07 00:25:13.323067 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-07 00:25:13.323078 | orchestrator | Wednesday 07 January 2026 00:25:09 +0000 (0:00:01.232) 0:00:19.041 ***** 2026-01-07 00:25:13.323088 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323099 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323110 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323121 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323131 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.323142 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.323152 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.323163 | orchestrator | 2026-01-07 00:25:13.323174 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-07 00:25:13.323185 | orchestrator | Wednesday 07 January 2026 00:25:09 +0000 (0:00:00.212) 0:00:19.253 ***** 2026-01-07 00:25:13.323196 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323215 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323234 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323252 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323271 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.323290 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.323309 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.323327 | orchestrator | 2026-01-07 00:25:13.323343 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-07 00:25:13.323355 | orchestrator | Wednesday 07 January 2026 00:25:09 +0000 (0:00:00.214) 0:00:19.468 ***** 2026-01-07 00:25:13.323366 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323377 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323387 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323398 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323409 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.323419 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.323430 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.323440 | orchestrator | 2026-01-07 00:25:13.323451 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-07 00:25:13.323462 | orchestrator | Wednesday 07 January 2026 00:25:09 +0000 (0:00:00.184) 0:00:19.653 ***** 2026-01-07 00:25:13.323474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:25:13.323487 | orchestrator | 2026-01-07 00:25:13.323498 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-07 00:25:13.323509 | orchestrator | Wednesday 07 January 2026 00:25:09 +0000 (0:00:00.252) 0:00:19.906 ***** 2026-01-07 00:25:13.323519 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323530 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323541 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323551 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323562 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.323573 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.323583 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.323594 | orchestrator | 2026-01-07 00:25:13.323605 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-07 00:25:13.323616 | orchestrator | Wednesday 07 January 2026 00:25:10 +0000 (0:00:00.536) 0:00:20.443 ***** 2026-01-07 00:25:13.323627 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:25:13.323662 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:25:13.323673 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:25:13.323684 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:25:13.323695 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:25:13.323705 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:25:13.323723 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:25:13.323734 | orchestrator | 2026-01-07 00:25:13.323753 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-07 00:25:13.323764 | orchestrator | Wednesday 07 January 2026 00:25:10 +0000 (0:00:00.200) 0:00:20.644 ***** 2026-01-07 00:25:13.323775 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323786 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323797 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323807 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323818 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:13.323829 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:13.323839 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:13.323850 | orchestrator | 2026-01-07 00:25:13.323863 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-07 00:25:13.323882 | orchestrator | Wednesday 07 January 2026 00:25:11 +0000 (0:00:00.999) 0:00:21.643 ***** 2026-01-07 00:25:13.323894 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.323905 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.323916 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.323927 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.323938 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:13.323948 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:13.323959 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:13.323969 | orchestrator | 
2026-01-07 00:25:13.323980 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-07 00:25:13.323991 | orchestrator | Wednesday 07 January 2026 00:25:12 +0000 (0:00:00.604) 0:00:22.248 ***** 2026-01-07 00:25:13.324002 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:13.324013 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:13.324024 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:13.324035 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:13.324055 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:51.246173 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:51.246270 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:51.246280 | orchestrator | 2026-01-07 00:25:51.246288 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-07 00:25:51.246297 | orchestrator | Wednesday 07 January 2026 00:25:13 +0000 (0:00:01.087) 0:00:23.336 ***** 2026-01-07 00:25:51.246303 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:51.246311 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:51.246317 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:51.246324 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:51.246330 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:25:51.246336 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:25:51.246343 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:25:51.246349 | orchestrator | 2026-01-07 00:25:51.246356 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-07 00:25:51.246362 | orchestrator | Wednesday 07 January 2026 00:25:29 +0000 (0:00:16.240) 0:00:39.576 ***** 2026-01-07 00:25:51.246369 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.246375 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:51.246382 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:51.246389 | orchestrator 
| ok: [testbed-node-5] 2026-01-07 00:25:51.246395 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:51.246401 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:51.246407 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:51.246413 | orchestrator | 2026-01-07 00:25:51.246420 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-07 00:25:51.246426 | orchestrator | Wednesday 07 January 2026 00:25:29 +0000 (0:00:00.188) 0:00:39.765 ***** 2026-01-07 00:25:51.246432 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.246438 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:51.246444 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:51.246451 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:51.246457 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:51.246463 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:51.246469 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:51.246493 | orchestrator | 2026-01-07 00:25:51.246499 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-07 00:25:51.246506 | orchestrator | Wednesday 07 January 2026 00:25:29 +0000 (0:00:00.188) 0:00:39.953 ***** 2026-01-07 00:25:51.246512 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.246518 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:25:51.246524 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:25:51.246530 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:25:51.246537 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:25:51.246543 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:25:51.246549 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:25:51.246555 | orchestrator | 2026-01-07 00:25:51.246561 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-07 00:25:51.246568 | orchestrator | Wednesday 07 January 2026 00:25:30 +0000 (0:00:00.190) 0:00:40.143 ***** 2026-01-07 
00:25:51.246575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:25:51.246583 | orchestrator |
2026-01-07 00:25:51.246589 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-07 00:25:51.246596 | orchestrator | Wednesday 07 January 2026 00:25:30 +0000 (0:00:00.249) 0:00:40.393 *****
2026-01-07 00:25:51.246602 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.246608 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.246614 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.246620 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.246691 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.246699 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.246705 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.246711 | orchestrator |
2026-01-07 00:25:51.246718 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-07 00:25:51.246724 | orchestrator | Wednesday 07 January 2026 00:25:31 +0000 (0:00:01.501) 0:00:41.895 *****
2026-01-07 00:25:51.246730 | orchestrator | changed: [testbed-manager]
2026-01-07 00:25:51.246738 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:25:51.246745 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:25:51.246753 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:25:51.246760 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:25:51.246767 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:25:51.246774 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:25:51.246781 | orchestrator |
2026-01-07 00:25:51.246789 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-07 00:25:51.246796 | orchestrator | Wednesday 07 January 2026 00:25:32 +0000 (0:00:01.062) 0:00:42.957 *****
2026-01-07 00:25:51.246805 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.246812 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.246819 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.246826 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.246834 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.246841 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.246848 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.246855 | orchestrator |
2026-01-07 00:25:51.246877 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-07 00:25:51.246886 | orchestrator | Wednesday 07 January 2026 00:25:33 +0000 (0:00:00.797) 0:00:43.755 *****
2026-01-07 00:25:51.246894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:25:51.246903 | orchestrator |
2026-01-07 00:25:51.246910 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-07 00:25:51.246918 | orchestrator | Wednesday 07 January 2026 00:25:33 +0000 (0:00:00.247) 0:00:44.003 *****
2026-01-07 00:25:51.246925 | orchestrator | changed: [testbed-manager]
2026-01-07 00:25:51.246939 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:25:51.246946 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:25:51.246953 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:25:51.246961 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:25:51.246968 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:25:51.246975 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:25:51.246982 | orchestrator |
2026-01-07 00:25:51.247002 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-07 00:25:51.247010 | orchestrator | Wednesday 07 January 2026 00:25:34 +0000 (0:00:00.971) 0:00:44.975 *****
2026-01-07 00:25:51.247017 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:25:51.247024 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:25:51.247031 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:25:51.247038 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:25:51.247046 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:25:51.247052 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:25:51.247059 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:25:51.247067 | orchestrator |
2026-01-07 00:25:51.247075 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-07 00:25:51.247082 | orchestrator | Wednesday 07 January 2026 00:25:35 +0000 (0:00:00.196) 0:00:45.171 *****
2026-01-07 00:25:51.247089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:25:51.247097 | orchestrator |
2026-01-07 00:25:51.247103 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-07 00:25:51.247109 | orchestrator | Wednesday 07 January 2026 00:25:35 +0000 (0:00:00.297) 0:00:45.469 *****
2026-01-07 00:25:51.247115 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.247121 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.247127 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.247134 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.247140 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.247146 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.247152 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.247158 | orchestrator |
2026-01-07 00:25:51.247164 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-07 00:25:51.247170 | orchestrator | Wednesday 07 January 2026 00:25:37 +0000 (0:00:01.712) 0:00:47.181 *****
2026-01-07 00:25:51.247176 | orchestrator | changed: [testbed-manager]
2026-01-07 00:25:51.247183 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:25:51.247189 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:25:51.247195 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:25:51.247201 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:25:51.247207 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:25:51.247213 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:25:51.247219 | orchestrator |
2026-01-07 00:25:51.247225 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-07 00:25:51.247232 | orchestrator | Wednesday 07 January 2026 00:25:38 +0000 (0:00:01.166) 0:00:48.348 *****
2026-01-07 00:25:51.247238 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:25:51.247244 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:25:51.247250 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:25:51.247256 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:25:51.247262 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:25:51.247268 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:25:51.247274 | orchestrator | changed: [testbed-manager]
2026-01-07 00:25:51.247281 | orchestrator |
2026-01-07 00:25:51.247287 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-07 00:25:51.247293 | orchestrator | Wednesday 07 January 2026 00:25:48 +0000 (0:00:10.520) 0:00:58.868 *****
2026-01-07 00:25:51.247299 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.247305 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.247316 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.247322 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.247328 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.247334 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.247340 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.247346 | orchestrator |
2026-01-07 00:25:51.247353 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-07 00:25:51.247359 | orchestrator | Wednesday 07 January 2026 00:25:49 +0000 (0:00:00.822) 0:00:59.691 *****
2026-01-07 00:25:51.247365 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.247371 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.247377 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.247383 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.247389 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.247395 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.247401 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.247407 | orchestrator |
2026-01-07 00:25:51.247414 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-07 00:25:51.247423 | orchestrator | Wednesday 07 January 2026 00:25:50 +0000 (0:00:00.852) 0:01:00.543 *****
2026-01-07 00:25:51.247429 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.247436 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.247442 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.247448 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.247454 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.247460 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.247466 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.247472 | orchestrator |
2026-01-07 00:25:51.247478 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-07 00:25:51.247485 | orchestrator | Wednesday 07 January 2026 00:25:50 +0000 (0:00:00.215) 0:01:00.759 *****
2026-01-07 00:25:51.247491 | orchestrator | ok: [testbed-manager]
2026-01-07 00:25:51.247497 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:25:51.247503 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:25:51.247509 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:25:51.247515 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:25:51.247521 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:25:51.247527 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:25:51.247533 | orchestrator |
2026-01-07 00:25:51.247539 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-07 00:25:51.247546 | orchestrator | Wednesday 07 January 2026 00:25:50 +0000 (0:00:00.229) 0:01:00.989 *****
2026-01-07 00:25:51.247552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:25:51.247559 | orchestrator |
2026-01-07 00:25:51.247569 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-07 00:28:07.091458 | orchestrator | Wednesday 07 January 2026 00:25:51 +0000 (0:00:00.280) 0:01:01.270 *****
2026-01-07 00:28:07.091574 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.091590 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.091602 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.091613 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.091671 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.091683 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.091694 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.091705 | orchestrator |
2026-01-07 00:28:07.091717 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-07 00:28:07.091729 | orchestrator | Wednesday 07 January 2026 00:25:52 +0000 (0:00:01.725) 0:01:02.995 *****
2026-01-07 00:28:07.091740 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:07.091753 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:07.091764 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:07.091775 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:07.091815 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:07.091834 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:07.091854 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:07.091873 | orchestrator |
2026-01-07 00:28:07.091893 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-07 00:28:07.091913 | orchestrator | Wednesday 07 January 2026 00:25:53 +0000 (0:00:00.549) 0:01:03.545 *****
2026-01-07 00:28:07.091935 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.091954 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.091974 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.091994 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092013 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092027 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092044 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092062 | orchestrator |
2026-01-07 00:28:07.092080 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-07 00:28:07.092098 | orchestrator | Wednesday 07 January 2026 00:25:53 +0000 (0:00:00.199) 0:01:03.744 *****
2026-01-07 00:28:07.092116 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.092135 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.092155 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.092168 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092186 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092198 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092208 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092219 | orchestrator |
2026-01-07 00:28:07.092230 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-07 00:28:07.092241 | orchestrator | Wednesday 07 January 2026 00:25:54 +0000 (0:00:01.165) 0:01:04.909 *****
2026-01-07 00:28:07.092252 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:07.092262 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:07.092273 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:07.092284 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:07.092295 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:07.092308 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:07.092325 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:07.092336 | orchestrator |
2026-01-07 00:28:07.092347 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-07 00:28:07.092358 | orchestrator | Wednesday 07 January 2026 00:25:56 +0000 (0:00:01.681) 0:01:06.590 *****
2026-01-07 00:28:07.092369 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.092380 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092391 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.092402 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092412 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092423 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.092434 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092444 | orchestrator |
2026-01-07 00:28:07.092455 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-07 00:28:07.092466 | orchestrator | Wednesday 07 January 2026 00:25:58 +0000 (0:00:02.341) 0:01:08.932 *****
2026-01-07 00:28:07.092478 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.092488 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092499 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092510 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.092520 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092531 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.092542 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092552 | orchestrator |
2026-01-07 00:28:07.092563 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-07 00:28:07.092574 | orchestrator | Wednesday 07 January 2026 00:26:35 +0000 (0:00:36.896) 0:01:45.829 *****
2026-01-07 00:28:07.092585 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:07.092596 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:07.092649 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:07.092673 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:07.092708 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:07.092722 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:07.092732 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:07.092750 | orchestrator |
2026-01-07 00:28:07.092763 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-07 00:28:07.092774 | orchestrator | Wednesday 07 January 2026 00:27:54 +0000 (0:01:18.801) 0:03:04.630 *****
2026-01-07 00:28:07.092789 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:07.092805 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.092816 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092826 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.092837 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092848 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092859 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092869 | orchestrator |
2026-01-07 00:28:07.092881 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-07 00:28:07.092892 | orchestrator | Wednesday 07 January 2026 00:27:56 +0000 (0:00:01.749) 0:03:06.380 *****
2026-01-07 00:28:07.092903 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:07.092914 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:07.092924 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:07.092935 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:07.092946 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:07.092957 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:07.092968 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:07.092979 | orchestrator |
2026-01-07 00:28:07.092989 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-07 00:28:07.093000 | orchestrator | Wednesday 07 January 2026 00:28:06 +0000 (0:00:09.711) 0:03:16.091 *****
2026-01-07 00:28:07.093044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-07 00:28:07.093063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-07 00:28:07.093080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-07 00:28:07.093093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-07 00:28:07.093105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-07 00:28:07.093125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-07 00:28:07.093137 | orchestrator |
2026-01-07 00:28:07.093148 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-07 00:28:07.093164 | orchestrator | Wednesday 07 January 2026 00:28:06 +0000 (0:00:00.299) 0:03:16.391 *****
2026-01-07 00:28:07.093176 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093187 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:07.093198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093209 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093220 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:07.093231 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:07.093242 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093253 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:07.093264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093275 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093286 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:28:07.093297 | orchestrator |
2026-01-07 00:28:07.093308 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-07 00:28:07.093319 | orchestrator | Wednesday 07 January 2026 00:28:07 +0000 (0:00:00.658) 0:03:17.050 *****
2026-01-07 00:28:07.093330 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:07.093342 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:07.093354 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:07.093365 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:07.093376 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:07.093394 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.673282 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.673448 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.673463 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.673476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.673486 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.673497 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.673508 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.673518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.673529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.673539 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.673550 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.673585 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.673596 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.673606 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.673648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.673659 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.673668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.673678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.673687 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.673697 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.673707 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:14.673719 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.673729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.673738 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.673748 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.673757 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.673767 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:14.673777 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.673789 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.673801 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.673818 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.673829 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.673841 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.673852 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.673863 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.673874 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.673885 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:14.673897 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:14.673908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.673920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.673932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.673943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.673955 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.673984 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.673996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:28:14.674075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.674088 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.674099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.674110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.674122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.674134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:28:14.674146 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.674155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.674165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:28:14.674175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.674184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.674194 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.674203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.674213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:28:14.674223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.674232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:28:14.674242 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.674252 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:28:14.674261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:28:14.674271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.674281 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:28:14.674290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:28:14.674300 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:28:14.674310 | orchestrator |
2026-01-07 00:28:14.674321 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-07 00:28:14.674330 | orchestrator | Wednesday 07 January 2026 00:28:12 +0000 (0:00:05.612) 0:03:22.662 *****
2026-01-07 00:28:14.674340 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674359 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674369 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:28:14.674413 | orchestrator |
2026-01-07 00:28:14.674423 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-07 00:28:14.674439 | orchestrator | Wednesday 07 January 2026 00:28:13 +0000 (0:00:00.577) 0:03:23.240 *****
2026-01-07 00:28:14.674449 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674459 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:14.674476 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674492 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674511 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:14.674535 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:14.674553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674569 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:14.674586 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674603 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:14.674681 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652138 | orchestrator |
2026-01-07 00:28:27.652292 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-07 00:28:27.652324 | orchestrator | Wednesday 07 January 2026 00:28:14 +0000 (0:00:01.457) 0:03:24.697 *****
2026-01-07 00:28:27.652347 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652370 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:27.652393 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652436 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:27.652457 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:27.652473 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652484 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:27.652496 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652507 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:28:27.652530 | orchestrator |
2026-01-07 00:28:27.652542 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-07 00:28:27.652553 | orchestrator | Wednesday 07 January 2026 00:28:15 +0000 (0:00:00.578) 0:03:25.276 *****
2026-01-07 00:28:27.652564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652576 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:27.652586 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652598 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:27.652609 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652655 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:27.652669 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652682 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:27.652695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652732 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:28:27.652758 | orchestrator |
2026-01-07 00:28:27.652770 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-07 00:28:27.652783 | orchestrator | Wednesday 07 January 2026 00:28:15 +0000 (0:00:00.283) 0:03:25.857 *****
2026-01-07 00:28:27.652795 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:27.652808 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:27.652821 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:27.652833 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:27.652846 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:27.652858 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:27.652870 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:27.652883 | orchestrator |
2026-01-07 00:28:27.652896 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-07 00:28:27.652908 | orchestrator | Wednesday 07 January 2026 00:28:16 +0000 (0:00:00.283) 0:03:26.141 *****
2026-01-07 00:28:27.652919 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:27.652931 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:27.652942 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:27.652953 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:27.652963 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:27.652974 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:27.652985 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:27.652995 | orchestrator |
2026-01-07 00:28:27.653007 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-07 00:28:27.653017 | orchestrator | Wednesday 07 January 2026 00:28:21 +0000 (0:00:05.533) 0:03:31.675 *****
2026-01-07 00:28:27.653029 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-07 00:28:27.653040 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:27.653051 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-07 00:28:27.653062 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-07 00:28:27.653073 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:27.653084 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:27.653095 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-07 00:28:27.653106 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:27.653116 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-07 00:28:27.653127 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-07 00:28:27.653138 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:27.653149 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:27.653160 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-07 00:28:27.653171 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:27.653182 | orchestrator |
2026-01-07 00:28:27.653193 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-07 00:28:27.653204 | orchestrator | Wednesday 07 January 2026 00:28:21 +0000 (0:00:00.298) 0:03:31.973 *****
2026-01-07 00:28:27.653215 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-07 00:28:27.653226 | orchestrator | ok: [testbed-node-3] =>
(item=cron) 2026-01-07 00:28:27.653238 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-01-07 00:28:27.653268 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-07 00:28:27.653280 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-07 00:28:27.653290 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-07 00:28:27.653301 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-07 00:28:27.653312 | orchestrator | 2026-01-07 00:28:27.653323 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-07 00:28:27.653334 | orchestrator | Wednesday 07 January 2026 00:28:22 +0000 (0:00:01.034) 0:03:33.008 ***** 2026-01-07 00:28:27.653346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:27.653369 | orchestrator | 2026-01-07 00:28:27.653380 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-07 00:28:27.653391 | orchestrator | Wednesday 07 January 2026 00:28:23 +0000 (0:00:00.462) 0:03:33.470 ***** 2026-01-07 00:28:27.653402 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:27.653418 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:27.653436 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:27.653455 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:27.653472 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:27.653490 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:27.653509 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:27.653528 | orchestrator | 2026-01-07 00:28:27.653548 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-07 00:28:27.653568 | orchestrator | Wednesday 07 January 2026 00:28:24 +0000 (0:00:01.237) 0:03:34.708 
***** 2026-01-07 00:28:27.653587 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:27.653600 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:27.653611 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:27.653650 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:27.653662 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:27.653673 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:27.653683 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:27.653694 | orchestrator | 2026-01-07 00:28:27.653705 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-07 00:28:27.653716 | orchestrator | Wednesday 07 January 2026 00:28:25 +0000 (0:00:00.663) 0:03:35.371 ***** 2026-01-07 00:28:27.653726 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:27.653737 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:27.653748 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:27.653758 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:27.653769 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:27.653780 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:27.653790 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:27.653801 | orchestrator | 2026-01-07 00:28:27.653812 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-07 00:28:27.653823 | orchestrator | Wednesday 07 January 2026 00:28:25 +0000 (0:00:00.615) 0:03:35.987 ***** 2026-01-07 00:28:27.653834 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:27.653844 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:27.653855 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:27.653949 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:27.653961 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:27.653972 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:27.653982 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:28:27.653994 | orchestrator | 2026-01-07 00:28:27.654005 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-07 00:28:27.654078 | orchestrator | Wednesday 07 January 2026 00:28:26 +0000 (0:00:00.694) 0:03:36.681 ***** 2026-01-07 00:28:27.654123 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744352.9998398, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:27.654140 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744361.3404393, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:27.654163 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744376.251451, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:27.654204 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744372.3786063, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416011 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744358.4070177, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416148 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744360.2438912, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416177 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744368.2987833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416192 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416220 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416255 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416267 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416309 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416322 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-01-07 00:28:32.416333 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 00:28:32.416344 | orchestrator | 2026-01-07 00:28:32.416357 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-07 00:28:32.416369 | orchestrator | Wednesday 07 January 2026 00:28:27 +0000 (0:00:00.994) 0:03:37.676 ***** 2026-01-07 00:28:32.416380 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:32.416393 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:32.416403 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:32.416414 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:32.416424 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:32.416435 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:32.416446 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:32.416456 | orchestrator | 2026-01-07 00:28:32.416467 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-07 00:28:32.416486 | orchestrator | Wednesday 07 January 2026 00:28:28 +0000 (0:00:01.097) 0:03:38.773 ***** 2026-01-07 00:28:32.416497 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:32.416508 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:32.416518 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:32.416529 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:32.416541 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 00:28:32.416554 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:32.416571 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:32.416584 | orchestrator | 2026-01-07 00:28:32.416596 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-07 00:28:32.416648 | orchestrator | Wednesday 07 January 2026 00:28:29 +0000 (0:00:01.149) 0:03:39.923 ***** 2026-01-07 00:28:32.416671 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:32.416691 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:32.416710 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:32.416728 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:32.416747 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:32.416767 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:32.416785 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:32.416805 | orchestrator | 2026-01-07 00:28:32.416824 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-07 00:28:32.416840 | orchestrator | Wednesday 07 January 2026 00:28:30 +0000 (0:00:01.113) 0:03:41.037 ***** 2026-01-07 00:28:32.416852 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:32.416864 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:32.416878 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:32.416890 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:32.416902 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:32.416914 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:32.416925 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:32.416935 | orchestrator | 2026-01-07 00:28:32.416946 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-07 00:28:32.416957 | orchestrator | Wednesday 07 January 2026 00:28:31 +0000 (0:00:00.253) 0:03:41.291 
***** 2026-01-07 00:28:32.416968 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:32.416979 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:32.416990 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:32.417000 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:32.417011 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:32.417021 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:32.417032 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:32.417042 | orchestrator | 2026-01-07 00:28:32.417053 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-07 00:28:32.417063 | orchestrator | Wednesday 07 January 2026 00:28:32 +0000 (0:00:00.769) 0:03:42.060 ***** 2026-01-07 00:28:32.417076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:32.417088 | orchestrator | 2026-01-07 00:28:32.417100 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-07 00:28:32.417120 | orchestrator | Wednesday 07 January 2026 00:28:32 +0000 (0:00:00.382) 0:03:42.442 ***** 2026-01-07 00:29:50.776320 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776414 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:29:50.776426 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:29:50.776432 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:29:50.776439 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:29:50.776445 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:29:50.776451 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:29:50.776457 | orchestrator | 2026-01-07 00:29:50.776464 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-07 00:29:50.776473 | 
orchestrator | Wednesday 07 January 2026 00:28:41 +0000 (0:00:08.984) 0:03:51.426 ***** 2026-01-07 00:29:50.776501 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776505 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776509 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776513 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776517 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776521 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776525 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776529 | orchestrator | 2026-01-07 00:29:50.776533 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-07 00:29:50.776537 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:01.369) 0:03:52.796 ***** 2026-01-07 00:29:50.776541 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776544 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776548 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776552 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776556 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776559 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776563 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776567 | orchestrator | 2026-01-07 00:29:50.776571 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-07 00:29:50.776575 | orchestrator | Wednesday 07 January 2026 00:28:43 +0000 (0:00:01.200) 0:03:53.996 ***** 2026-01-07 00:29:50.776578 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776582 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776586 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776590 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776593 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776597 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776601 | 
orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776605 | orchestrator | 2026-01-07 00:29:50.776632 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-07 00:29:50.776638 | orchestrator | Wednesday 07 January 2026 00:28:44 +0000 (0:00:00.264) 0:03:54.260 ***** 2026-01-07 00:29:50.776642 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776645 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776649 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776653 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776657 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776660 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776664 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776668 | orchestrator | 2026-01-07 00:29:50.776672 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-07 00:29:50.776675 | orchestrator | Wednesday 07 January 2026 00:28:44 +0000 (0:00:00.277) 0:03:54.537 ***** 2026-01-07 00:29:50.776679 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776683 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776687 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776690 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776694 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776698 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776701 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776705 | orchestrator | 2026-01-07 00:29:50.776720 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-07 00:29:50.776724 | orchestrator | Wednesday 07 January 2026 00:28:44 +0000 (0:00:00.271) 0:03:54.809 ***** 2026-01-07 00:29:50.776728 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:29:50.776732 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:29:50.776736 | 
orchestrator | ok: [testbed-node-4] 2026-01-07 00:29:50.776740 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:29:50.776743 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:29:50.776747 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:29:50.776751 | orchestrator | ok: [testbed-manager] 2026-01-07 00:29:50.776754 | orchestrator | 2026-01-07 00:29:50.776758 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-07 00:29:50.776762 | orchestrator | Wednesday 07 January 2026 00:28:50 +0000 (0:00:05.487) 0:04:00.297 ***** 2026-01-07 00:29:50.776771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:50.776777 | orchestrator | 2026-01-07 00:29:50.776781 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-07 00:29:50.776784 | orchestrator | Wednesday 07 January 2026 00:28:50 +0000 (0:00:00.364) 0:04:00.662 ***** 2026-01-07 00:29:50.776788 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776792 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-07 00:29:50.776796 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776800 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-07 00:29:50.776804 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:29:50.776808 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:29:50.776812 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776815 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-07 00:29:50.776819 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776823 | 
orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-07 00:29:50.776827 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:29:50.776830 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776834 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:29:50.776838 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-07 00:29:50.776842 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776845 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-07 00:29:50.776860 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:29:50.776864 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:29:50.776868 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-07 00:29:50.776872 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-07 00:29:50.776875 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:29:50.776880 | orchestrator | 2026-01-07 00:29:50.776885 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-07 00:29:50.776889 | orchestrator | Wednesday 07 January 2026 00:28:50 +0000 (0:00:00.295) 0:04:00.957 ***** 2026-01-07 00:29:50.776894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:50.776899 | orchestrator | 2026-01-07 00:29:50.776903 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-07 00:29:50.776908 | orchestrator | Wednesday 07 January 2026 00:28:51 +0000 (0:00:00.355) 0:04:01.312 ***** 2026-01-07 00:29:50.776912 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-07 00:29:50.776917 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:29:50.776921 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-07 00:29:50.776925 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:29:50.776930 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-07 00:29:50.776934 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-07 00:29:50.776939 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:29:50.776944 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-07 00:29:50.776948 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:29:50.776953 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-07 00:29:50.776957 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:29:50.776962 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:29:50.776970 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-07 00:29:50.776974 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:29:50.776978 | orchestrator | 2026-01-07 00:29:50.776983 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-07 00:29:50.776988 | orchestrator | Wednesday 07 January 2026 00:28:51 +0000 (0:00:00.264) 0:04:01.577 ***** 2026-01-07 00:29:50.776992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:29:50.776997 | orchestrator | 2026-01-07 00:29:50.777001 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-07 00:29:50.777005 | orchestrator | Wednesday 07 January 2026 00:28:51 +0000 (0:00:00.382) 0:04:01.959 ***** 2026-01-07 00:29:50.777010 | orchestrator | changed: [testbed-manager] 
2026-01-07 00:29:50.777014 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:29:50.777018 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:29:50.777023 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:29:50.777028 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:29:50.777033 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:29:50.777037 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:29:50.777041 | orchestrator |
2026-01-07 00:29:50.777046 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-07 00:29:50.777051 | orchestrator | Wednesday 07 January 2026 00:29:25 +0000 (0:00:33.262) 0:04:35.222 *****
2026-01-07 00:29:50.777055 | orchestrator | changed: [testbed-manager]
2026-01-07 00:29:50.777058 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:29:50.777062 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:29:50.777066 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:29:50.777070 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:29:50.777078 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:29:50.777082 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:29:50.777085 | orchestrator |
2026-01-07 00:29:50.777089 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-07 00:29:50.777093 | orchestrator | Wednesday 07 January 2026 00:29:33 +0000 (0:00:08.610) 0:04:43.833 *****
2026-01-07 00:29:50.777097 | orchestrator | changed: [testbed-manager]
2026-01-07 00:29:50.777101 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:29:50.777104 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:29:50.777108 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:29:50.777112 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:29:50.777116 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:29:50.777119 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:29:50.777123 | orchestrator |
2026-01-07 00:29:50.777127 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-07 00:29:50.777131 | orchestrator | Wednesday 07 January 2026 00:29:41 +0000 (0:00:08.041) 0:04:51.874 *****
2026-01-07 00:29:50.777135 | orchestrator | ok: [testbed-manager]
2026-01-07 00:29:50.777138 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:29:50.777142 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:29:50.777146 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:29:50.777150 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:29:50.777153 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:29:50.777157 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:29:50.777161 | orchestrator |
2026-01-07 00:29:50.777165 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-07 00:29:50.777169 | orchestrator | Wednesday 07 January 2026 00:29:43 +0000 (0:00:01.790) 0:04:53.665 *****
2026-01-07 00:29:50.777173 | orchestrator | changed: [testbed-manager]
2026-01-07 00:29:50.777176 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:29:50.777180 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:29:50.777184 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:29:50.777188 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:29:50.777194 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:29:50.777198 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:29:50.777202 | orchestrator |
2026-01-07 00:29:50.777209 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-07 00:30:03.027321 | orchestrator | Wednesday 07 January 2026 00:29:50 +0000 (0:00:07.134) 0:05:00.799 *****
2026-01-07 00:30:03.027446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:30:03.027462 | orchestrator |
2026-01-07 00:30:03.027467 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-07 00:30:03.027472 | orchestrator | Wednesday 07 January 2026 00:29:51 +0000 (0:00:00.491) 0:05:01.291 *****
2026-01-07 00:30:03.027477 | orchestrator | changed: [testbed-manager]
2026-01-07 00:30:03.027483 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:30:03.027488 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:30:03.027491 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:30:03.027495 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:30:03.027499 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:30:03.027503 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:30:03.027507 | orchestrator |
2026-01-07 00:30:03.027511 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-07 00:30:03.027515 | orchestrator | Wednesday 07 January 2026 00:29:52 +0000 (0:00:00.747) 0:05:02.039 *****
2026-01-07 00:30:03.027519 | orchestrator | ok: [testbed-manager]
2026-01-07 00:30:03.027524 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:30:03.027528 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:30:03.027531 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:30:03.027535 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:30:03.027539 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:30:03.027543 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:30:03.027547 | orchestrator |
2026-01-07 00:30:03.027550 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-07 00:30:03.027554 | orchestrator | Wednesday 07 January 2026 00:29:54 +0000 (0:00:02.966) 0:05:05.005 *****
2026-01-07 00:30:03.027558 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:30:03.027562 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:30:03.027566 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:30:03.027570 | orchestrator | changed: [testbed-manager]
2026-01-07 00:30:03.027573 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:30:03.027577 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:30:03.027581 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:30:03.027585 | orchestrator |
2026-01-07 00:30:03.027589 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-07 00:30:03.027592 | orchestrator | Wednesday 07 January 2026 00:29:55 +0000 (0:00:00.719) 0:05:05.724 *****
2026-01-07 00:30:03.027596 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.027600 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.027604 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.027629 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:30:03.027635 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:30:03.027641 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:30:03.027647 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:30:03.027653 | orchestrator |
2026-01-07 00:30:03.027659 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-07 00:30:03.027665 | orchestrator | Wednesday 07 January 2026 00:29:55 +0000 (0:00:00.254) 0:05:05.978 *****
2026-01-07 00:30:03.027672 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.027678 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.027699 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.027705 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:30:03.027711 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:30:03.027736 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:30:03.027744 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:30:03.027750 | orchestrator |
2026-01-07 00:30:03.027759 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-07 00:30:03.027765 | orchestrator | Wednesday 07 January 2026 00:29:56 +0000 (0:00:00.358) 0:05:06.337 *****
2026-01-07 00:30:03.027771 | orchestrator | ok: [testbed-manager]
2026-01-07 00:30:03.027776 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:30:03.027782 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:30:03.027800 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:30:03.027806 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:30:03.027812 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:30:03.027818 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:30:03.027825 | orchestrator |
2026-01-07 00:30:03.027832 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-07 00:30:03.027836 | orchestrator | Wednesday 07 January 2026 00:29:56 +0000 (0:00:00.268) 0:05:06.605 *****
2026-01-07 00:30:03.027840 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.027844 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.027848 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.027852 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:30:03.027855 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:30:03.027859 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:30:03.027863 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:30:03.027868 | orchestrator |
2026-01-07 00:30:03.027874 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-07 00:30:03.027882 | orchestrator | Wednesday 07 January 2026 00:29:56 +0000 (0:00:00.268) 0:05:06.873 *****
2026-01-07 00:30:03.027887 | orchestrator | ok: [testbed-manager]
2026-01-07 00:30:03.027894 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:30:03.027900 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:30:03.027907 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:30:03.027914 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:30:03.027922 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:30:03.027928 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:30:03.027934 | orchestrator |
2026-01-07 00:30:03.027951 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-07 00:30:03.027958 | orchestrator | Wednesday 07 January 2026 00:29:57 +0000 (0:00:00.288) 0:05:07.161 *****
2026-01-07 00:30:03.027971 | orchestrator | ok: [testbed-manager] =>
2026-01-07 00:30:03.027976 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.027983 | orchestrator | ok: [testbed-node-3] =>
2026-01-07 00:30:03.027989 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.027994 | orchestrator | ok: [testbed-node-4] =>
2026-01-07 00:30:03.028001 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.028008 | orchestrator | ok: [testbed-node-5] =>
2026-01-07 00:30:03.028014 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.028038 | orchestrator | ok: [testbed-node-0] =>
2026-01-07 00:30:03.028045 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.028052 | orchestrator | ok: [testbed-node-1] =>
2026-01-07 00:30:03.028059 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.028065 | orchestrator | ok: [testbed-node-2] =>
2026-01-07 00:30:03.028071 | orchestrator |  docker_version: 5:27.5.1
2026-01-07 00:30:03.028077 | orchestrator |
2026-01-07 00:30:03.028085 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-07 00:30:03.028091 | orchestrator | Wednesday 07 January 2026 00:29:57 +0000 (0:00:00.244) 0:05:07.406 *****
2026-01-07 00:30:03.028097 | orchestrator | ok: [testbed-manager] =>
2026-01-07 00:30:03.028103 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028110 | orchestrator | ok: [testbed-node-3] =>
2026-01-07 00:30:03.028116 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028121 | orchestrator | ok: [testbed-node-4] =>
2026-01-07 00:30:03.028127 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028133 | orchestrator | ok: [testbed-node-5] =>
2026-01-07 00:30:03.028149 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028154 | orchestrator | ok: [testbed-node-0] =>
2026-01-07 00:30:03.028158 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028163 | orchestrator | ok: [testbed-node-1] =>
2026-01-07 00:30:03.028167 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028172 | orchestrator | ok: [testbed-node-2] =>
2026-01-07 00:30:03.028177 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-07 00:30:03.028182 | orchestrator |
2026-01-07 00:30:03.028186 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-07 00:30:03.028191 | orchestrator | Wednesday 07 January 2026 00:29:57 +0000 (0:00:00.269) 0:05:07.676 *****
2026-01-07 00:30:03.028196 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.028201 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.028204 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.028208 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:30:03.028212 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:30:03.028216 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:30:03.028219 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:30:03.028223 | orchestrator |
2026-01-07 00:30:03.028227 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-07 00:30:03.028231 | orchestrator | Wednesday 07 January 2026 00:29:57 +0000 (0:00:00.250) 0:05:07.926 *****
2026-01-07 00:30:03.028235 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.028238 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.028242 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.028246 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:30:03.028250 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:30:03.028253 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:30:03.028257 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:30:03.028261 | orchestrator |
2026-01-07 00:30:03.028264 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-07 00:30:03.028268 | orchestrator | Wednesday 07 January 2026 00:29:58 +0000 (0:00:00.263) 0:05:08.190 *****
2026-01-07 00:30:03.028274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:30:03.028279 | orchestrator |
2026-01-07 00:30:03.028289 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-07 00:30:03.028293 | orchestrator | Wednesday 07 January 2026 00:29:58 +0000 (0:00:00.387) 0:05:08.577 *****
2026-01-07 00:30:03.028297 | orchestrator | ok: [testbed-manager]
2026-01-07 00:30:03.028300 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:30:03.028304 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:30:03.028308 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:30:03.028312 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:30:03.028316 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:30:03.028319 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:30:03.028323 | orchestrator |
2026-01-07 00:30:03.028327 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-07 00:30:03.028331 | orchestrator | Wednesday 07 January 2026 00:29:59 +0000 (0:00:00.905) 0:05:09.483 *****
2026-01-07 00:30:03.028334 | orchestrator | ok: [testbed-manager]
2026-01-07 00:30:03.028338 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:30:03.028342 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:30:03.028346 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:30:03.028349 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:30:03.028353 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:30:03.028357 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:30:03.028360 | orchestrator |
2026-01-07 00:30:03.028365 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-07 00:30:03.028370 | orchestrator | Wednesday 07 January 2026 00:30:02 +0000 (0:00:03.213) 0:05:12.696 *****
2026-01-07 00:30:03.028378 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-07 00:30:03.028382 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-07 00:30:03.028386 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-07 00:30:03.028390 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:30:03.028394 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-07 00:30:03.028398 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-07 00:30:03.028401 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-07 00:30:03.028405 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-07 00:30:03.028409 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-07 00:30:03.028413 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-07 00:30:03.028416 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:30:03.028420 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-07 00:30:03.028424 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-07 00:30:03.028428 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-07 00:30:03.028431 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:30:03.028435 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-07 00:30:03.028444 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-07 00:31:05.167529 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-07 00:31:05.167747 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:05.167766 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-07 00:31:05.167778 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-07 00:31:05.167790 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-07 00:31:05.167801 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:05.167812 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:05.167823 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-07 00:31:05.167834 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-07 00:31:05.167845 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-07 00:31:05.167855 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:05.167867 | orchestrator |
2026-01-07 00:31:05.167879 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-07 00:31:05.167891 | orchestrator | Wednesday 07 January 2026 00:30:03 +0000 (0:00:00.538) 0:05:13.235 *****
2026-01-07 00:31:05.167903 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.167914 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.167925 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.167936 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.167947 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.167958 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.167969 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.167980 | orchestrator |
2026-01-07 00:31:05.167991 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-07 00:31:05.168003 | orchestrator | Wednesday 07 January 2026 00:30:09 +0000 (0:00:06.587) 0:05:19.822 *****
2026-01-07 00:31:05.168014 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.168025 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.168036 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168049 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168062 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168075 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.168088 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168102 | orchestrator |
2026-01-07 00:31:05.168115 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-07 00:31:05.168129 | orchestrator | Wednesday 07 January 2026 00:30:10 +0000 (0:00:01.113) 0:05:20.936 *****
2026-01-07 00:31:05.168141 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.168154 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.168193 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168206 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.168218 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168232 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168245 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168257 | orchestrator |
2026-01-07 00:31:05.168270 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-07 00:31:05.168283 | orchestrator | Wednesday 07 January 2026 00:30:19 +0000 (0:00:08.648) 0:05:29.585 *****
2026-01-07 00:31:05.168296 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168309 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.168322 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.168334 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168348 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168378 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.168391 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168405 | orchestrator |
2026-01-07 00:31:05.168417 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-07 00:31:05.168428 | orchestrator | Wednesday 07 January 2026 00:30:22 +0000 (0:00:03.201) 0:05:32.786 *****
2026-01-07 00:31:05.168439 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.168450 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168461 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.168472 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168483 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168494 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.168504 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168515 | orchestrator |
2026-01-07 00:31:05.168526 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-07 00:31:05.168537 | orchestrator | Wednesday 07 January 2026 00:30:24 +0000 (0:00:01.405) 0:05:34.192 *****
2026-01-07 00:31:05.168548 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.168559 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168570 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.168581 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168592 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168627 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.168646 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168664 | orchestrator |
2026-01-07 00:31:05.168682 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-07 00:31:05.168701 | orchestrator | Wednesday 07 January 2026 00:30:25 +0000 (0:00:01.601) 0:05:35.793 *****
2026-01-07 00:31:05.168719 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:05.168737 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:05.168755 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:05.168773 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:05.168792 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:05.168810 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:05.168828 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.168847 | orchestrator |
2026-01-07 00:31:05.168866 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-07 00:31:05.168884 | orchestrator | Wednesday 07 January 2026 00:30:26 +0000 (0:00:00.609) 0:05:36.403 *****
2026-01-07 00:31:05.168902 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.168919 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.168936 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.168952 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.168970 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.168987 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.169005 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.169023 | orchestrator |
2026-01-07 00:31:05.169042 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-07 00:31:05.169085 | orchestrator | Wednesday 07 January 2026 00:30:36 +0000 (0:00:10.187) 0:05:46.590 *****
2026-01-07 00:31:05.169118 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.169136 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.169154 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.169172 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.169189 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.169206 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.169223 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.169240 | orchestrator |
2026-01-07 00:31:05.169257 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-07 00:31:05.169274 | orchestrator | Wednesday 07 January 2026 00:30:37 +0000 (0:00:00.947) 0:05:47.538 *****
2026-01-07 00:31:05.169290 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.169307 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.169324 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.169340 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.169359 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.169377 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.169395 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.169413 | orchestrator |
2026-01-07 00:31:05.169431 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-07 00:31:05.169449 | orchestrator | Wednesday 07 January 2026 00:30:47 +0000 (0:00:09.728) 0:05:57.267 *****
2026-01-07 00:31:05.169466 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.169484 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.169501 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.169519 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.169538 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.169555 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.169574 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.169592 | orchestrator |
2026-01-07 00:31:05.169663 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-07 00:31:05.169677 | orchestrator | Wednesday 07 January 2026 00:30:58 +0000 (0:00:11.534) 0:06:08.801 *****
2026-01-07 00:31:05.169688 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-07 00:31:05.169699 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-07 00:31:05.169710 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-07 00:31:05.169721 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-07 00:31:05.169732 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-07 00:31:05.169742 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-07 00:31:05.169753 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-07 00:31:05.169764 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-07 00:31:05.169775 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-07 00:31:05.169785 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-07 00:31:05.169796 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-07 00:31:05.169807 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-07 00:31:05.169818 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-07 00:31:05.169828 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-07 00:31:05.169839 | orchestrator |
2026-01-07 00:31:05.169850 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-07 00:31:05.169861 | orchestrator | Wednesday 07 January 2026 00:30:59 +0000 (0:00:01.201) 0:06:10.002 *****
2026-01-07 00:31:05.169872 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:05.169883 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:05.169894 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:05.169905 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:05.169916 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:05.169927 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:05.169938 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:05.169960 | orchestrator |
2026-01-07 00:31:05.169971 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-07 00:31:05.169982 | orchestrator | Wednesday 07 January 2026 00:31:00 +0000 (0:00:00.473) 0:06:10.476 *****
2026-01-07 00:31:05.169993 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.170004 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.170089 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.170105 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.170117 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.170137 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.170155 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.170175 | orchestrator |
2026-01-07 00:31:05.170193 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-07 00:31:05.170215 | orchestrator | Wednesday 07 January 2026 00:31:04 +0000 (0:00:03.812) 0:06:14.288 *****
2026-01-07 00:31:05.170233 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:05.170251 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:05.170271 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:05.170290 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:05.170309 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:05.170329 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:05.170349 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:05.170368 | orchestrator |
2026-01-07 00:31:05.170389 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-07 00:31:05.170409 | orchestrator | Wednesday 07 January 2026 00:31:04 +0000 (0:00:00.474) 0:06:14.763 *****
2026-01-07 00:31:05.170430 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-07 00:31:05.170449 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-07 00:31:05.170469 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:05.170492 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-07 00:31:05.170513 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-07 00:31:05.170533 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:05.170550 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-07 00:31:05.170569 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-07 00:31:05.170588 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:05.170721 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-07 00:31:23.915689 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-07 00:31:23.915795 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:23.915807 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-07 00:31:23.915816 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-07 00:31:23.915824 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:23.915831 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-07 00:31:23.915870 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-07 00:31:23.915879 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:23.915886 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-07 00:31:23.915894 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-07 00:31:23.915902 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:23.915910 | orchestrator |
2026-01-07 00:31:23.915919 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-07 00:31:23.915927 | orchestrator | Wednesday 07 January 2026 00:31:05 +0000 (0:00:00.669) 0:06:15.432 *****
2026-01-07 00:31:23.915933 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:23.915939 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:23.915946 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:23.915953 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:23.915960 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:23.915987 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:23.915994 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:23.916001 | orchestrator |
2026-01-07 00:31:23.916051 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-07 00:31:23.916060 | orchestrator | Wednesday 07 January 2026 00:31:05 +0000 (0:00:00.494) 0:06:15.927 *****
2026-01-07 00:31:23.916066 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:23.916073 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:23.916079 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:23.916086 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:23.916092 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:23.916098 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:23.916104 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:23.916111 | orchestrator |
2026-01-07 00:31:23.916117 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-07 00:31:23.916124 | orchestrator | Wednesday 07 January 2026 00:31:06 +0000 (0:00:00.473) 0:06:16.401 *****
2026-01-07 00:31:23.916131 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:23.916137 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:23.916144 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:23.916150 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:23.916157 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:23.916165 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:23.916172 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:23.916179 | orchestrator |
2026-01-07 00:31:23.916186 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-07 00:31:23.916192 | orchestrator | Wednesday 07 January 2026 00:31:06 +0000 (0:00:00.489) 0:06:16.890 *****
2026-01-07 00:31:23.916199 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:23.916205 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:23.916213 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:23.916223 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:23.916229 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:23.916234 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:23.916240 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:23.916245 | orchestrator |
2026-01-07 00:31:23.916251 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-07 00:31:23.916257 | orchestrator | Wednesday 07 January 2026 00:31:08 +0000 (0:00:01.881) 0:06:18.772 *****
2026-01-07 00:31:23.916265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:31:23.916274 | orchestrator |
2026-01-07 00:31:23.916280 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-07 00:31:23.916286 | orchestrator | Wednesday 07 January 2026 00:31:09 +0000 (0:00:00.815) 0:06:19.587 *****
2026-01-07 00:31:23.916292 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:23.916299 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:23.916305 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:23.916310 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:23.916316 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:23.916322 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:23.916329 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:23.916335 | orchestrator | 2026-01-07 00:31:23.916341 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-07 00:31:23.916347 | orchestrator | Wednesday 07 January 2026 00:31:10 +0000 (0:00:00.820) 0:06:20.408 ***** 2026-01-07 00:31:23.916352 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916389 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:23.916398 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:23.916404 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:23.916411 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:23.916426 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:23.916433 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:23.916439 | orchestrator | 2026-01-07 00:31:23.916445 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-07 00:31:23.916452 | orchestrator | Wednesday 07 January 2026 00:31:11 +0000 (0:00:00.810) 0:06:21.218 ***** 2026-01-07 00:31:23.916458 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916464 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:23.916471 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:23.916478 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:23.916485 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:23.916507 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:23.916515 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:23.916531 | orchestrator | 2026-01-07 00:31:23.916538 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] 
*** 2026-01-07 00:31:23.916564 | orchestrator | Wednesday 07 January 2026 00:31:12 +0000 (0:00:01.473) 0:06:22.691 ***** 2026-01-07 00:31:23.916571 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:23.916598 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:23.916604 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:23.916610 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:23.916616 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:23.916622 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:23.916628 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:23.916634 | orchestrator | 2026-01-07 00:31:23.916640 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-07 00:31:23.916646 | orchestrator | Wednesday 07 January 2026 00:31:14 +0000 (0:00:01.417) 0:06:24.109 ***** 2026-01-07 00:31:23.916653 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916660 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:23.916666 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:23.916673 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:23.916680 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:23.916686 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:23.916693 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:23.916699 | orchestrator | 2026-01-07 00:31:23.916705 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-07 00:31:23.916712 | orchestrator | Wednesday 07 January 2026 00:31:15 +0000 (0:00:01.419) 0:06:25.529 ***** 2026-01-07 00:31:23.916718 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:23.916725 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:23.916732 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:23.916738 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:23.916745 | orchestrator | changed: [testbed-node-0] 
2026-01-07 00:31:23.916751 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:23.916758 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:23.916765 | orchestrator | 2026-01-07 00:31:23.916772 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-07 00:31:23.916779 | orchestrator | Wednesday 07 January 2026 00:31:16 +0000 (0:00:01.430) 0:06:26.959 ***** 2026-01-07 00:31:23.916786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:31:23.916794 | orchestrator | 2026-01-07 00:31:23.916801 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-07 00:31:23.916808 | orchestrator | Wednesday 07 January 2026 00:31:17 +0000 (0:00:00.934) 0:06:27.894 ***** 2026-01-07 00:31:23.916815 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916821 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:23.916828 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:23.916835 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:23.916841 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:23.916848 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:23.916868 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:23.916875 | orchestrator | 2026-01-07 00:31:23.916882 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-07 00:31:23.916889 | orchestrator | Wednesday 07 January 2026 00:31:19 +0000 (0:00:01.348) 0:06:29.242 ***** 2026-01-07 00:31:23.916895 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916902 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:23.916936 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:23.916944 | orchestrator | ok: [testbed-node-5] 2026-01-07 
00:31:23.916951 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:23.916957 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:23.916964 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:23.916970 | orchestrator | 2026-01-07 00:31:23.916977 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-07 00:31:23.916984 | orchestrator | Wednesday 07 January 2026 00:31:20 +0000 (0:00:01.098) 0:06:30.340 ***** 2026-01-07 00:31:23.916990 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.916997 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:23.917003 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:23.917009 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:23.917016 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:23.917022 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:23.917029 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:23.917035 | orchestrator | 2026-01-07 00:31:23.917042 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-07 00:31:23.917048 | orchestrator | Wednesday 07 January 2026 00:31:21 +0000 (0:00:01.103) 0:06:31.443 ***** 2026-01-07 00:31:23.917055 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:23.917062 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:23.917068 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:23.917075 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:23.917081 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:23.917088 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:23.917094 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:23.917101 | orchestrator | 2026-01-07 00:31:23.917107 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-07 00:31:23.917114 | orchestrator | Wednesday 07 January 2026 00:31:22 +0000 (0:00:01.353) 0:06:32.797 ***** 2026-01-07 00:31:23.917120 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:31:23.917127 | orchestrator | 2026-01-07 00:31:23.917134 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:23.917140 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.853) 0:06:33.651 ***** 2026-01-07 00:31:23.917146 | orchestrator | 2026-01-07 00:31:23.917152 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:23.917158 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.037) 0:06:33.689 ***** 2026-01-07 00:31:23.917165 | orchestrator | 2026-01-07 00:31:23.917171 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:23.917177 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.037) 0:06:33.726 ***** 2026-01-07 00:31:23.917183 | orchestrator | 2026-01-07 00:31:23.917189 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:23.917204 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.042) 0:06:33.769 ***** 2026-01-07 00:31:49.893231 | orchestrator | 2026-01-07 00:31:49.893372 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:49.893398 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.037) 0:06:33.806 ***** 2026-01-07 00:31:49.893418 | orchestrator | 2026-01-07 00:31:49.893438 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:49.893457 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.036) 0:06:33.843 ***** 2026-01-07 00:31:49.893509 | orchestrator | 2026-01-07 
00:31:49.893529 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-07 00:31:49.893617 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.042) 0:06:33.885 ***** 2026-01-07 00:31:49.893638 | orchestrator | 2026-01-07 00:31:49.893658 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-07 00:31:49.893678 | orchestrator | Wednesday 07 January 2026 00:31:23 +0000 (0:00:00.048) 0:06:33.934 ***** 2026-01-07 00:31:49.893697 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:49.893718 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:49.893738 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:49.893757 | orchestrator | 2026-01-07 00:31:49.893777 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-07 00:31:49.893797 | orchestrator | Wednesday 07 January 2026 00:31:25 +0000 (0:00:01.175) 0:06:35.110 ***** 2026-01-07 00:31:49.893817 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:49.893838 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:49.893857 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:49.893875 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:49.893893 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:49.893910 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:49.893928 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:49.893946 | orchestrator | 2026-01-07 00:31:49.893964 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-07 00:31:49.893982 | orchestrator | Wednesday 07 January 2026 00:31:26 +0000 (0:00:01.579) 0:06:36.690 ***** 2026-01-07 00:31:49.894001 | orchestrator | changed: [testbed-manager] 2026-01-07 00:31:49.894095 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:49.894119 | orchestrator | changed: [testbed-node-5] 2026-01-07 
00:31:49.894137 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:49.894158 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:49.894177 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:49.894196 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:49.894216 | orchestrator | 2026-01-07 00:31:49.894236 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-07 00:31:49.894257 | orchestrator | Wednesday 07 January 2026 00:31:27 +0000 (0:00:01.236) 0:06:37.926 ***** 2026-01-07 00:31:49.894297 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:49.894331 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:49.894351 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:49.894371 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:49.894391 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:49.894410 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:49.894428 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:49.894448 | orchestrator | 2026-01-07 00:31:49.894468 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-07 00:31:49.894509 | orchestrator | Wednesday 07 January 2026 00:31:30 +0000 (0:00:02.397) 0:06:40.323 ***** 2026-01-07 00:31:49.894530 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:49.894574 | orchestrator | 2026-01-07 00:31:49.894595 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-07 00:31:49.894615 | orchestrator | Wednesday 07 January 2026 00:31:30 +0000 (0:00:00.103) 0:06:40.427 ***** 2026-01-07 00:31:49.894635 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.894654 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:49.894674 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:49.894694 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:49.894714 | 
orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:49.894734 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:31:49.894755 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:49.894775 | orchestrator | 2026-01-07 00:31:49.894795 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-07 00:31:49.894816 | orchestrator | Wednesday 07 January 2026 00:31:31 +0000 (0:00:01.012) 0:06:41.440 ***** 2026-01-07 00:31:49.894853 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:49.894872 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:49.894891 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:49.894911 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:49.894929 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:49.894946 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:49.894964 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:49.894984 | orchestrator | 2026-01-07 00:31:49.895004 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-07 00:31:49.895023 | orchestrator | Wednesday 07 January 2026 00:31:31 +0000 (0:00:00.486) 0:06:41.927 ***** 2026-01-07 00:31:49.895045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:31:49.895068 | orchestrator | 2026-01-07 00:31:49.895088 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-07 00:31:49.895107 | orchestrator | Wednesday 07 January 2026 00:31:32 +0000 (0:00:01.057) 0:06:42.984 ***** 2026-01-07 00:31:49.895127 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.895147 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:49.895167 | orchestrator | ok: 
[testbed-node-4] 2026-01-07 00:31:49.895185 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:49.895203 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:49.895222 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:49.895242 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:49.895261 | orchestrator | 2026-01-07 00:31:49.895281 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-07 00:31:49.895301 | orchestrator | Wednesday 07 January 2026 00:31:33 +0000 (0:00:00.883) 0:06:43.868 ***** 2026-01-07 00:31:49.895322 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-07 00:31:49.895370 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-07 00:31:49.895391 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-07 00:31:49.895410 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-07 00:31:49.895429 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-07 00:31:49.895448 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-07 00:31:49.895468 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-07 00:31:49.895488 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-07 00:31:49.895508 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-07 00:31:49.895527 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-07 00:31:49.895621 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-07 00:31:49.895643 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-07 00:31:49.895662 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-07 00:31:49.895682 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-07 00:31:49.895702 | orchestrator | 2026-01-07 00:31:49.895722 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-07 00:31:49.895742 | orchestrator | Wednesday 07 January 2026 00:31:36 +0000 (0:00:02.444) 0:06:46.312 ***** 2026-01-07 00:31:49.895761 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:49.895781 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:49.895799 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:49.895817 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:49.895836 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:49.895856 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:49.895875 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:49.895895 | orchestrator | 2026-01-07 00:31:49.895915 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-07 00:31:49.895935 | orchestrator | Wednesday 07 January 2026 00:31:36 +0000 (0:00:00.630) 0:06:46.943 ***** 2026-01-07 00:31:49.895970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:31:49.895993 | orchestrator | 2026-01-07 00:31:49.896012 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-07 00:31:49.896030 | orchestrator | Wednesday 07 January 2026 00:31:37 +0000 (0:00:00.741) 0:06:47.684 ***** 2026-01-07 00:31:49.896049 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.896067 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:49.896086 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:49.896105 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:49.896124 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:49.896143 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:49.896295 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 00:31:49.896318 | orchestrator | 2026-01-07 00:31:49.896338 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-07 00:31:49.896371 | orchestrator | Wednesday 07 January 2026 00:31:38 +0000 (0:00:00.836) 0:06:48.521 ***** 2026-01-07 00:31:49.896392 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.896412 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:49.896432 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:49.896452 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:49.896472 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:49.896492 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:49.896511 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:49.896531 | orchestrator | 2026-01-07 00:31:49.896626 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-07 00:31:49.896648 | orchestrator | Wednesday 07 January 2026 00:31:39 +0000 (0:00:00.963) 0:06:49.484 ***** 2026-01-07 00:31:49.896667 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:49.896687 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:49.896707 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:49.896726 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:49.896745 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:49.896762 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:49.896780 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:49.896797 | orchestrator | 2026-01-07 00:31:49.896816 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-07 00:31:49.896833 | orchestrator | Wednesday 07 January 2026 00:31:39 +0000 (0:00:00.442) 0:06:49.927 ***** 2026-01-07 00:31:49.896852 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.896870 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:49.896887 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:31:49.896905 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:49.896922 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:49.896938 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:31:49.896954 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:31:49.896971 | orchestrator | 2026-01-07 00:31:49.896989 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-07 00:31:49.897005 | orchestrator | Wednesday 07 January 2026 00:31:41 +0000 (0:00:01.530) 0:06:51.458 ***** 2026-01-07 00:31:49.897022 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:49.897040 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:49.897055 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:49.897071 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:49.897089 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:49.897106 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:49.897124 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:49.897140 | orchestrator | 2026-01-07 00:31:49.897155 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-07 00:31:49.897172 | orchestrator | Wednesday 07 January 2026 00:31:41 +0000 (0:00:00.478) 0:06:51.936 ***** 2026-01-07 00:31:49.897204 | orchestrator | ok: [testbed-manager] 2026-01-07 00:31:49.897221 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:31:49.897238 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:31:49.897256 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:31:49.897273 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:31:49.897291 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:31:49.897326 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:32:21.706267 | orchestrator | 2026-01-07 00:32:21.706360 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-07 00:32:21.706371 | orchestrator | Wednesday 07 January 2026 00:31:49 +0000 (0:00:07.981) 0:06:59.917 ***** 2026-01-07 00:32:21.706378 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706386 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:32:21.706393 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:32:21.706399 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:32:21.706405 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:32:21.706411 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:32:21.706417 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:32:21.706423 | orchestrator | 2026-01-07 00:32:21.706429 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-07 00:32:21.706436 | orchestrator | Wednesday 07 January 2026 00:31:51 +0000 (0:00:01.603) 0:07:01.521 ***** 2026-01-07 00:32:21.706442 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706448 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:32:21.706454 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:32:21.706460 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:32:21.706467 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:32:21.706473 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:32:21.706480 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:32:21.706487 | orchestrator | 2026-01-07 00:32:21.706494 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-07 00:32:21.706550 | orchestrator | Wednesday 07 January 2026 00:31:53 +0000 (0:00:01.759) 0:07:03.280 ***** 2026-01-07 00:32:21.706557 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706562 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:32:21.706569 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:32:21.706575 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:32:21.706582 | 
orchestrator | changed: [testbed-node-1] 2026-01-07 00:32:21.706588 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:32:21.706594 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:32:21.706603 | orchestrator | 2026-01-07 00:32:21.706611 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:32:21.706617 | orchestrator | Wednesday 07 January 2026 00:31:54 +0000 (0:00:01.647) 0:07:04.928 ***** 2026-01-07 00:32:21.706623 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706629 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:32:21.706635 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:32:21.706641 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:32:21.706647 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:32:21.706653 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:32:21.706660 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:32:21.706667 | orchestrator | 2026-01-07 00:32:21.706673 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:32:21.706680 | orchestrator | Wednesday 07 January 2026 00:31:55 +0000 (0:00:00.896) 0:07:05.824 ***** 2026-01-07 00:32:21.706688 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:32:21.706693 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:32:21.706697 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:32:21.706701 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:32:21.706705 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:32:21.706709 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:32:21.706713 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:32:21.706717 | orchestrator | 2026-01-07 00:32:21.706721 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-07 00:32:21.706743 | orchestrator | Wednesday 07 January 2026 00:31:56 +0000 (0:00:00.919) 0:07:06.743 ***** 
2026-01-07 00:32:21.706748 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:32:21.706751 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:32:21.706755 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:32:21.706759 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:32:21.706763 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:32:21.706767 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:32:21.706771 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:32:21.706774 | orchestrator | 2026-01-07 00:32:21.706778 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-07 00:32:21.706782 | orchestrator | Wednesday 07 January 2026 00:31:57 +0000 (0:00:00.472) 0:07:07.216 ***** 2026-01-07 00:32:21.706786 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706790 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:32:21.706794 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:32:21.706797 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:32:21.706801 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:32:21.706805 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:32:21.706808 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:32:21.706812 | orchestrator | 2026-01-07 00:32:21.706816 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-07 00:32:21.706820 | orchestrator | Wednesday 07 January 2026 00:31:57 +0000 (0:00:00.479) 0:07:07.696 ***** 2026-01-07 00:32:21.706824 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:21.706828 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:32:21.706833 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:32:21.706837 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:32:21.706841 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:32:21.706845 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:32:21.706850 | orchestrator | ok: [testbed-node-2] 2026-01-07 
00:32:21.706854 | orchestrator |
2026-01-07 00:32:21.706858 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-07 00:32:21.706863 | orchestrator | Wednesday 07 January 2026 00:31:58 +0000 (0:00:00.477) 0:07:08.174 *****
2026-01-07 00:32:21.706867 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:21.706871 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:21.706876 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:21.706880 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:21.706885 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:21.706889 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:21.706893 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:21.706897 | orchestrator |
2026-01-07 00:32:21.706902 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-07 00:32:21.706906 | orchestrator | Wednesday 07 January 2026 00:31:58 +0000 (0:00:00.657) 0:07:08.831 *****
2026-01-07 00:32:21.706911 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:21.706915 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:21.706919 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:21.706924 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:21.706928 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:21.706932 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:21.706936 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:21.706941 | orchestrator |
2026-01-07 00:32:21.706957 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-07 00:32:21.706962 | orchestrator | Wednesday 07 January 2026 00:32:03 +0000 (0:00:04.999) 0:07:13.831 *****
2026-01-07 00:32:21.706966 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:32:21.706971 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:32:21.706975 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:32:21.706980 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:32:21.706984 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:32:21.706989 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:32:21.706994 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:32:21.706998 | orchestrator |
2026-01-07 00:32:21.707002 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-07 00:32:21.707011 | orchestrator | Wednesday 07 January 2026 00:32:04 +0000 (0:00:00.481) 0:07:14.312 *****
2026-01-07 00:32:21.707017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:21.707023 | orchestrator |
2026-01-07 00:32:21.707028 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-07 00:32:21.707032 | orchestrator | Wednesday 07 January 2026 00:32:05 +0000 (0:00:00.944) 0:07:15.257 *****
2026-01-07 00:32:21.707036 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:21.707041 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:21.707045 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:21.707050 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:21.707054 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:21.707058 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:21.707062 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:21.707067 | orchestrator |
2026-01-07 00:32:21.707071 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-07 00:32:21.707076 | orchestrator | Wednesday 07 January 2026 00:32:07 +0000 (0:00:01.891) 0:07:17.148 *****
2026-01-07 00:32:21.707080 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:21.707085 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:21.707089 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:21.707094 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:21.707098 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:21.707102 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:21.707105 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:21.707109 | orchestrator |
2026-01-07 00:32:21.707113 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-07 00:32:21.707117 | orchestrator | Wednesday 07 January 2026 00:32:08 +0000 (0:00:01.132) 0:07:18.281 *****
2026-01-07 00:32:21.707120 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:21.707124 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:21.707128 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:21.707132 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:21.707135 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:21.707153 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:21.707157 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:21.707161 | orchestrator |
2026-01-07 00:32:21.707164 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-07 00:32:21.707168 | orchestrator | Wednesday 07 January 2026 00:32:09 +0000 (0:00:00.852) 0:07:19.134 *****
2026-01-07 00:32:21.707176 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707181 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707185 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707189 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707193 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707197 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707200 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-07 00:32:21.707204 | orchestrator |
2026-01-07 00:32:21.707208 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-07 00:32:21.707215 | orchestrator | Wednesday 07 January 2026 00:32:10 +0000 (0:00:01.882) 0:07:21.016 *****
2026-01-07 00:32:21.707219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:21.707223 | orchestrator |
2026-01-07 00:32:21.707227 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-07 00:32:21.707231 | orchestrator | Wednesday 07 January 2026 00:32:11 +0000 (0:00:00.759) 0:07:21.775 *****
2026-01-07 00:32:21.707234 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:21.707238 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:21.707242 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:21.707246 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:21.707249 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:21.707253 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:21.707257 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:21.707261 | orchestrator |
2026-01-07 00:32:21.707268 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-07 00:32:53.261910 | orchestrator | Wednesday 07 January 2026 00:32:21 +0000 (0:00:09.952) 0:07:31.727 *****
2026-01-07 00:32:53.262001 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:53.262012 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:53.262089 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:53.262099 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:53.262109 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:53.262119 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:53.262129 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:53.262139 | orchestrator |
2026-01-07 00:32:53.262150 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-07 00:32:53.262161 | orchestrator | Wednesday 07 January 2026 00:32:23 +0000 (0:00:01.890) 0:07:33.618 *****
2026-01-07 00:32:53.262170 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:53.262181 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:53.262191 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:53.262201 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:53.262211 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:53.262221 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:53.262232 | orchestrator |
2026-01-07 00:32:53.262243 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-07 00:32:53.262253 | orchestrator | Wednesday 07 January 2026 00:32:25 +0000 (0:00:01.421) 0:07:35.040 *****
2026-01-07 00:32:53.262264 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.262276 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.262287 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.262294 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.262300 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.262307 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.262313 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.262319 | orchestrator |
2026-01-07 00:32:53.262326 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-07 00:32:53.262332 | orchestrator |
2026-01-07 00:32:53.262348 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-07 00:32:53.262355 | orchestrator | Wednesday 07 January 2026 00:32:26 +0000 (0:00:01.250) 0:07:36.291 *****
2026-01-07 00:32:53.262361 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:32:53.262368 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:32:53.262374 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:32:53.262380 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:32:53.262386 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:32:53.262392 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:32:53.262399 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:32:53.262414 | orchestrator |
2026-01-07 00:32:53.262420 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-07 00:32:53.262454 | orchestrator |
2026-01-07 00:32:53.262502 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-07 00:32:53.262509 | orchestrator | Wednesday 07 January 2026 00:32:26 +0000 (0:00:00.641) 0:07:36.932 *****
2026-01-07 00:32:53.262516 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.262523 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.262530 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.262537 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.262543 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.262550 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.262558 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.262566 | orchestrator |
2026-01-07 00:32:53.262585 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-07 00:32:53.262592 | orchestrator | Wednesday 07 January 2026 00:32:28 +0000 (0:00:01.351) 0:07:38.284 *****
2026-01-07 00:32:53.262599 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:53.262606 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:53.262613 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:53.262620 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:53.262627 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:53.262634 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:53.262641 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:53.262648 | orchestrator |
2026-01-07 00:32:53.262655 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-07 00:32:53.262662 | orchestrator | Wednesday 07 January 2026 00:32:29 +0000 (0:00:01.450) 0:07:39.734 *****
2026-01-07 00:32:53.262669 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:32:53.262675 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:32:53.262683 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:32:53.262690 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:32:53.262696 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:32:53.262704 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:32:53.262710 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:32:53.262717 | orchestrator |
2026-01-07 00:32:53.262724 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-07 00:32:53.262732 | orchestrator | Wednesday 07 January 2026 00:32:30 +0000 (0:00:00.509) 0:07:40.244 *****
2026-01-07 00:32:53.262739 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:53.262748 | orchestrator |
2026-01-07 00:32:53.262755 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-07 00:32:53.262762 | orchestrator | Wednesday 07 January 2026 00:32:31 +0000 (0:00:01.042) 0:07:41.286 *****
2026-01-07 00:32:53.262771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:53.262781 | orchestrator |
2026-01-07 00:32:53.262789 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-07 00:32:53.262796 | orchestrator | Wednesday 07 January 2026 00:32:32 +0000 (0:00:00.929) 0:07:42.216 *****
2026-01-07 00:32:53.262803 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.262810 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.262817 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.262823 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.262829 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.262835 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.262841 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.262847 | orchestrator |
2026-01-07 00:32:53.262868 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-07 00:32:53.262897 | orchestrator | Wednesday 07 January 2026 00:32:41 +0000 (0:00:09.689) 0:07:51.906 *****
2026-01-07 00:32:53.262911 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.262918 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.262924 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.262930 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.262936 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.262942 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.262948 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.262954 | orchestrator |
2026-01-07 00:32:53.262961 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-07 00:32:53.262967 | orchestrator | Wednesday 07 January 2026 00:32:42 +0000 (0:00:01.028) 0:07:52.935 *****
2026-01-07 00:32:53.262973 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.262979 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.262985 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.262992 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.262998 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.263004 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.263010 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.263016 | orchestrator |
2026-01-07 00:32:53.263022 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-07 00:32:53.263028 | orchestrator | Wednesday 07 January 2026 00:32:44 +0000 (0:00:01.316) 0:07:54.252 *****
2026-01-07 00:32:53.263034 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.263041 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.263047 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.263053 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.263059 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.263065 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.263071 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.263077 | orchestrator |
2026-01-07 00:32:53.263083 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-07 00:32:53.263089 | orchestrator | Wednesday 07 January 2026 00:32:46 +0000 (0:00:01.867) 0:07:56.120 *****
2026-01-07 00:32:53.263096 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.263102 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.263108 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.263114 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.263120 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.263126 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.263132 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.263138 | orchestrator |
2026-01-07 00:32:53.263144 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-07 00:32:53.263150 | orchestrator | Wednesday 07 January 2026 00:32:47 +0000 (0:00:01.229) 0:07:57.349 *****
2026-01-07 00:32:53.263157 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.263163 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.263169 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.263175 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.263181 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.263187 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.263193 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.263199 | orchestrator |
2026-01-07 00:32:53.263210 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-07 00:32:53.263216 | orchestrator |
2026-01-07 00:32:53.263222 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-07 00:32:53.263228 | orchestrator | Wednesday 07 January 2026 00:32:48 +0000 (0:00:01.148) 0:07:58.498 *****
2026-01-07 00:32:53.263235 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:53.263241 | orchestrator |
2026-01-07 00:32:53.263248 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-07 00:32:53.263254 | orchestrator | Wednesday 07 January 2026 00:32:49 +0000 (0:00:00.780) 0:07:59.278 *****
2026-01-07 00:32:53.263265 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:53.263271 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:53.263277 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:53.263283 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:53.263290 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:53.263296 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:53.263302 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:53.263308 | orchestrator |
2026-01-07 00:32:53.263314 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-07 00:32:53.263320 | orchestrator | Wednesday 07 January 2026 00:32:50 +0000 (0:00:01.008) 0:08:00.286 *****
2026-01-07 00:32:53.263326 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:53.263333 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:53.263339 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:53.263345 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:53.263351 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:53.263357 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:53.263363 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:53.263369 | orchestrator |
2026-01-07 00:32:53.263376 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-07 00:32:53.263382 | orchestrator | Wednesday 07 January 2026 00:32:51 +0000 (0:00:01.190) 0:08:01.477 *****
2026-01-07 00:32:53.263388 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:53.263394 | orchestrator |
2026-01-07 00:32:53.263401 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-07 00:32:53.263407 | orchestrator | Wednesday 07 January 2026 00:32:52 +0000 (0:00:00.939) 0:08:02.417 *****
2026-01-07 00:32:53.263413 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:53.263419 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:53.263425 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:53.263431 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:53.263437 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:53.263444 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:53.263450 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:53.263456 | orchestrator |
2026-01-07 00:32:53.263512 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-07 00:32:54.729575 | orchestrator | Wednesday 07 January 2026 00:32:53 +0000 (0:00:00.859) 0:08:03.276 *****
2026-01-07 00:32:54.729687 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:54.729703 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:54.729715 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:54.729726 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:54.729737 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:54.729748 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:54.729774 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:54.729796 | orchestrator |
2026-01-07 00:32:54.729809 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:32:54.729821 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-07 00:32:54.729836 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:32:54.729853 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:32:54.729871 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-07 00:32:54.729888 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-07 00:32:54.729947 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-07 00:32:54.729967 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-07 00:32:54.729978 | orchestrator |
2026-01-07 00:32:54.729989 | orchestrator |
2026-01-07 00:32:54.730000 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:32:54.730011 | orchestrator | Wednesday 07 January 2026 00:32:54 +0000 (0:00:01.091) 0:08:04.368 *****
2026-01-07 00:32:54.730124 | orchestrator | ===============================================================================
2026-01-07 00:32:54.730138 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.80s
2026-01-07 00:32:54.730151 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.90s
2026-01-07 00:32:54.730163 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.26s
2026-01-07 00:32:54.730177 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.24s
2026-01-07 00:32:54.730205 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.53s
2026-01-07 00:32:54.730218 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.52s
2026-01-07 00:32:54.730230 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.19s
2026-01-07 00:32:54.730244 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.95s
2026-01-07 00:32:54.730255 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.73s
2026-01-07 00:32:54.730266 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 9.71s
2026-01-07 00:32:54.730277 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.69s
2026-01-07 00:32:54.730288 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.98s
2026-01-07 00:32:54.730299 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.65s
2026-01-07 00:32:54.730309 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.61s
2026-01-07 00:32:54.730320 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.04s
2026-01-07 00:32:54.730332 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.98s
2026-01-07 00:32:54.730343 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.13s
2026-01-07 00:32:54.730354 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.59s
2026-01-07 00:32:54.730364 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.61s
2026-01-07 00:32:54.730375 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.53s
2026-01-07 00:32:55.001406 | orchestrator | + osism apply fail2ban
2026-01-07 00:33:07.468980 | orchestrator | 2026-01-07 00:33:07 | INFO  | Task cb9fa080-d77c-48ad-bb4b-b95c538214ab (fail2ban) was prepared for execution.
2026-01-07 00:33:07.469098 | orchestrator | 2026-01-07 00:33:07 | INFO  | It takes a moment until task cb9fa080-d77c-48ad-bb4b-b95c538214ab (fail2ban) has been started and output is visible here.
2026-01-07 00:33:29.102118 | orchestrator |
2026-01-07 00:33:29.102208 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-07 00:33:29.102215 | orchestrator |
2026-01-07 00:33:29.102221 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-07 00:33:29.102225 | orchestrator | Wednesday 07 January 2026 00:33:11 +0000 (0:00:00.237) 0:00:00.237 *****
2026-01-07 00:33:29.102230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:33:29.102255 | orchestrator |
2026-01-07 00:33:29.102259 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-07 00:33:29.102263 | orchestrator | Wednesday 07 January 2026 00:33:12 +0000 (0:00:00.970) 0:00:01.208 *****
2026-01-07 00:33:29.102267 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:29.102273 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:29.102342 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:29.102347 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:29.102352 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:29.102355 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:29.102359 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:29.102363 | orchestrator |
2026-01-07 00:33:29.102367 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-07 00:33:29.102371 | orchestrator | Wednesday 07 January 2026 00:33:24 +0000 (0:00:11.594) 0:00:12.803 *****
2026-01-07 00:33:29.102375 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:29.102379 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:29.102383 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:29.102387 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:29.102390 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:29.102394 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:29.102398 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:29.102402 | orchestrator |
2026-01-07 00:33:29.102427 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-07 00:33:29.102432 | orchestrator | Wednesday 07 January 2026 00:33:25 +0000 (0:00:01.474) 0:00:14.277 *****
2026-01-07 00:33:29.102436 | orchestrator | ok: [testbed-manager]
2026-01-07 00:33:29.102441 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:33:29.102445 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:33:29.102448 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:33:29.102452 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:33:29.102456 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:33:29.102460 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:33:29.102463 | orchestrator |
2026-01-07 00:33:29.102467 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-07 00:33:29.102471 | orchestrator | Wednesday 07 January 2026 00:33:27 +0000 (0:00:01.442) 0:00:15.720 *****
2026-01-07 00:33:29.102475 | orchestrator | changed: [testbed-manager]
2026-01-07 00:33:29.102479 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:33:29.102483 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:33:29.102487 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:33:29.102490 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:33:29.102494 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:33:29.102498 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:33:29.102502 | orchestrator |
2026-01-07 00:33:29.102506 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:33:29.102510 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102536 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102545 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102549 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102553 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102557 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102560 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:33:29.102570 | orchestrator |
2026-01-07 00:33:29.102574 | orchestrator |
2026-01-07 00:33:29.102578 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:33:29.102582 | orchestrator | Wednesday 07 January 2026 00:33:28 +0000 (0:00:01.557) 0:00:17.277 *****
2026-01-07 00:33:29.102586 | orchestrator | ===============================================================================
2026-01-07 00:33:29.102589 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.59s
2026-01-07 00:33:29.102593 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.56s
2026-01-07 00:33:29.102597 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.47s
2026-01-07 00:33:29.102601 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.44s
2026-01-07 00:33:29.102604 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.97s
2026-01-07 00:33:29.370930 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-07 00:33:29.371007 | orchestrator | + osism apply network
2026-01-07 00:33:41.400999 | orchestrator | 2026-01-07 00:33:41 | INFO  | Task f78534d9-0a54-425f-82e1-cb9c45aa003b (network) was prepared for execution.
2026-01-07 00:33:41.401114 | orchestrator | 2026-01-07 00:33:41 | INFO  | It takes a moment until task f78534d9-0a54-425f-82e1-cb9c45aa003b (network) has been started and output is visible here.
2026-01-07 00:34:08.889096 | orchestrator |
2026-01-07 00:34:08.889257 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-07 00:34:08.889280 | orchestrator |
2026-01-07 00:34:08.889293 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-07 00:34:08.889305 | orchestrator | Wednesday 07 January 2026 00:33:45 +0000 (0:00:00.185) 0:00:00.185 *****
2026-01-07 00:34:08.889316 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:08.889392 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:08.889406 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:08.889418 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:08.889429 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:08.889440 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:08.889451 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:08.889462 | orchestrator |
2026-01-07 00:34:08.889474 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-07 00:34:08.889486 | orchestrator | Wednesday 07 January 2026 00:33:45 +0000 (0:00:00.516) 0:00:00.701 *****
2026-01-07 00:34:08.889498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:34:08.889512 | orchestrator |
2026-01-07 00:34:08.889523 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-07 00:34:08.889535 | orchestrator | Wednesday 07 January 2026 00:33:46 +0000 (0:00:00.928) 0:00:01.630 *****
2026-01-07 00:34:08.889545 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:08.889556 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:08.889568 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:08.889579 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:08.889590 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:08.889601 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:08.889614 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:08.889628 | orchestrator |
2026-01-07 00:34:08.889641 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-07 00:34:08.889653 | orchestrator | Wednesday 07 January 2026 00:33:48 +0000 (0:00:02.071) 0:00:03.702 *****
2026-01-07 00:34:08.889667 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:08.889680 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:08.889693 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:08.889706 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:08.889719 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:08.889758 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:08.889770 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:08.889783 | orchestrator |
2026-01-07 00:34:08.889796 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-07 00:34:08.889810 | orchestrator | Wednesday 07 January 2026 00:33:50 +0000 (0:00:01.795) 0:00:05.497 *****
2026-01-07 00:34:08.889823 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-07 00:34:08.889836 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-07 00:34:08.889849 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-07 00:34:08.889862 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-07 00:34:08.889874 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-07 00:34:08.889887 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-07 00:34:08.889897 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-07 00:34:08.889908 | orchestrator |
2026-01-07 00:34:08.889919 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-07 00:34:08.889931 | orchestrator | Wednesday 07 January 2026 00:33:51 +0000 (0:00:00.906) 0:00:06.403 *****
2026-01-07 00:34:08.889942 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:34:08.889955 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 00:34:08.889966 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 00:34:08.889977 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-07 00:34:08.889988 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-07 00:34:08.889999 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 00:34:08.890010 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 00:34:08.890090 | orchestrator |
2026-01-07 00:34:08.890102 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-07 00:34:08.890113 | orchestrator | Wednesday 07 January 2026 00:33:54 +0000 (0:00:03.217) 0:00:09.621 *****
2026-01-07 00:34:08.890124 | orchestrator | changed: [testbed-manager]
2026-01-07 00:34:08.890136 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:08.890147 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:08.890158 | orchestrator | changed:
[testbed-node-2] 2026-01-07 00:34:08.890169 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:08.890180 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:08.890191 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:08.890202 | orchestrator | 2026-01-07 00:34:08.890213 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-07 00:34:08.890224 | orchestrator | Wednesday 07 January 2026 00:33:56 +0000 (0:00:01.676) 0:00:11.297 ***** 2026-01-07 00:34:08.890235 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:34:08.890246 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 00:34:08.890257 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 00:34:08.890268 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 00:34:08.890279 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 00:34:08.890290 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 00:34:08.890301 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 00:34:08.890312 | orchestrator | 2026-01-07 00:34:08.890349 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-07 00:34:08.890363 | orchestrator | Wednesday 07 January 2026 00:33:58 +0000 (0:00:01.721) 0:00:13.018 ***** 2026-01-07 00:34:08.890374 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:08.890385 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:34:08.890396 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:34:08.890407 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:34:08.890418 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:34:08.890428 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:34:08.890439 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:34:08.890450 | orchestrator | 2026-01-07 00:34:08.890461 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-07 00:34:08.890493 | 
orchestrator | Wednesday 07 January 2026 00:33:59 +0000 (0:00:01.153) 0:00:14.172 ***** 2026-01-07 00:34:08.890514 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:08.890526 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:08.890537 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:08.890560 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:08.890571 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:08.890582 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:08.890593 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:08.890604 | orchestrator | 2026-01-07 00:34:08.890615 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-07 00:34:08.890626 | orchestrator | Wednesday 07 January 2026 00:33:59 +0000 (0:00:00.637) 0:00:14.810 ***** 2026-01-07 00:34:08.890638 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:08.890649 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:34:08.890660 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:34:08.890671 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:34:08.890682 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:34:08.890693 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:34:08.890704 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:34:08.890715 | orchestrator | 2026-01-07 00:34:08.890726 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-07 00:34:08.890737 | orchestrator | Wednesday 07 January 2026 00:34:02 +0000 (0:00:02.421) 0:00:17.231 ***** 2026-01-07 00:34:08.890748 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:08.890759 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:08.890770 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:08.890781 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:08.890792 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:08.890803 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:08.890815 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-07 00:34:08.890828 | orchestrator | 2026-01-07 00:34:08.890839 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-07 00:34:08.890850 | orchestrator | Wednesday 07 January 2026 00:34:03 +0000 (0:00:00.832) 0:00:18.064 ***** 2026-01-07 00:34:08.890881 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:08.890892 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:08.890903 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:08.890914 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:08.890926 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:08.890937 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:08.890948 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:08.890959 | orchestrator | 2026-01-07 00:34:08.890970 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-07 00:34:08.890981 | orchestrator | Wednesday 07 January 2026 00:34:04 +0000 (0:00:01.661) 0:00:19.726 ***** 2026-01-07 00:34:08.890992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:34:08.891005 | orchestrator | 2026-01-07 00:34:08.891016 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-07 00:34:08.891028 | orchestrator | Wednesday 07 January 2026 00:34:05 +0000 (0:00:01.141) 0:00:20.868 ***** 2026-01-07 00:34:08.891039 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:08.891050 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:34:08.891061 | orchestrator 
| ok: [testbed-node-1] 2026-01-07 00:34:08.891076 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:34:08.891088 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:34:08.891099 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:34:08.891110 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:34:08.891121 | orchestrator | 2026-01-07 00:34:08.891132 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-07 00:34:08.891143 | orchestrator | Wednesday 07 January 2026 00:34:06 +0000 (0:00:00.971) 0:00:21.840 ***** 2026-01-07 00:34:08.891161 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:08.891172 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:34:08.891183 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:34:08.891194 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:34:08.891205 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:34:08.891216 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:34:08.891227 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:34:08.891237 | orchestrator | 2026-01-07 00:34:08.891249 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-07 00:34:08.891260 | orchestrator | Wednesday 07 January 2026 00:34:07 +0000 (0:00:00.762) 0:00:22.602 ***** 2026-01-07 00:34:08.891271 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891282 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891293 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891304 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891315 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891356 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891368 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891379 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891390 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891401 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891412 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891423 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-07 00:34:08.891434 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891445 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-07 00:34:08.891456 | orchestrator | 2026-01-07 00:34:08.891474 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-07 00:34:23.284648 | orchestrator | Wednesday 07 January 2026 00:34:08 +0000 (0:00:01.175) 0:00:23.778 ***** 2026-01-07 00:34:23.284742 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:23.284753 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:23.284761 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:23.284767 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:23.284773 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:23.284781 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:23.284788 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:23.284794 | orchestrator | 2026-01-07 00:34:23.284802 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-07 00:34:23.284809 | orchestrator | Wednesday 07 January 2026 00:34:09 +0000 (0:00:00.590) 0:00:24.368 ***** 2026-01-07 00:34:23.284816 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4 2026-01-07 00:34:23.284825 | orchestrator | 2026-01-07 00:34:23.284831 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-07 00:34:23.284838 | orchestrator | Wednesday 07 January 2026 00:34:13 +0000 (0:00:04.002) 0:00:28.371 ***** 2026-01-07 00:34:23.284845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284853 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-07 
00:34:23.284911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284918 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.284931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.284987 | orchestrator | 2026-01-07 00:34:23.284993 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-07 00:34:23.285000 | orchestrator | Wednesday 07 January 2026 00:34:18 +0000 (0:00:04.862) 0:00:33.233 ***** 2026-01-07 00:34:23.285006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285019 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285032 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285055 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.285062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-07 00:34:23.285068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.285075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 
'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.285081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.285087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:23.285101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:36.216479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-07 00:34:36.216583 | orchestrator | 2026-01-07 00:34:36.216594 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-07 00:34:36.216602 | orchestrator | Wednesday 07 January 2026 00:34:23 +0000 (0:00:04.939) 0:00:38.173 ***** 2026-01-07 00:34:36.216610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:34:36.216787 | orchestrator | 2026-01-07 00:34:36.216794 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-07 00:34:36.216801 | orchestrator | Wednesday 07 January 2026 00:34:24 +0000 (0:00:01.045) 0:00:39.218 ***** 2026-01-07 00:34:36.216807 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:36.216815 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:34:36.216821 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:34:36.216827 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:34:36.216833 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:34:36.216840 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:34:36.216846 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:34:36.216853 | orchestrator | 2026-01-07 00:34:36.216859 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-07 00:34:36.216866 | orchestrator | Wednesday 07 January 2026 00:34:26 +0000 (0:00:02.022) 0:00:41.241 ***** 2026-01-07 00:34:36.216872 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.216880 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.216886 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.216892 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.216899 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:36.216906 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.216912 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.216918 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.216924 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.216930 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:36.216937 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.216955 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.216962 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.216968 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.216974 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:36.216980 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.216986 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.216993 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.216999 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.217005 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:36.217011 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.217017 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.217023 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.217030 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.217043 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.217051 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.217058 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.217065 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.217072 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:36.217079 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:36.217086 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-07 00:34:36.217094 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-07 00:34:36.217101 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-07 00:34:36.217108 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-07 00:34:36.217115 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:36.217122 | orchestrator | 2026-01-07 00:34:36.217130 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-07 00:34:36.217152 | orchestrator | Wednesday 07 January 2026 00:34:27 +0000 (0:00:00.755) 0:00:41.996 ***** 2026-01-07 00:34:36.217160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:34:36.217168 | orchestrator | 2026-01-07 00:34:36.217175 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-07 00:34:36.217182 | orchestrator | Wednesday 07 January 2026 00:34:28 +0000 (0:00:01.049) 0:00:43.046 ***** 2026-01-07 00:34:36.217189 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:34:36.217197 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:36.217203 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:36.217210 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:36.217218 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:36.217381 | orchestrator | 
skipping: [testbed-node-4]
2026-01-07 00:34:36.217389 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:36.217397 | orchestrator |
2026-01-07 00:34:36.217404 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-07 00:34:36.217412 | orchestrator | Wednesday 07 January 2026 00:34:28 +0000 (0:00:00.532) 0:00:43.579 *****
2026-01-07 00:34:36.217419 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:36.217426 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:36.217432 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:36.217439 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:36.217445 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:36.217451 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:36.217457 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:36.217463 | orchestrator |
2026-01-07 00:34:36.217469 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-07 00:34:36.217476 | orchestrator | Wednesday 07 January 2026 00:34:29 +0000 (0:00:00.744) 0:00:44.324 *****
2026-01-07 00:34:36.217482 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:36.217488 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:36.217494 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:36.217500 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:36.217507 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:36.217513 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:36.217519 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:36.217525 | orchestrator |
2026-01-07 00:34:36.217531 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-07 00:34:36.217538 | orchestrator | Wednesday 07 January 2026 00:34:29 +0000 (0:00:00.575) 0:00:44.899 *****
2026-01-07 00:34:36.217544 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:36.217557 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:36.217564 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:36.217570 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:36.217576 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:36.217582 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:36.217588 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:36.217595 | orchestrator |
2026-01-07 00:34:36.217601 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-07 00:34:36.217608 | orchestrator | Wednesday 07 January 2026 00:34:31 +0000 (0:00:01.742) 0:00:46.641 *****
2026-01-07 00:34:36.217618 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:36.217625 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:36.217631 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:36.217637 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:36.217644 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:36.217650 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:36.217656 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:36.217662 | orchestrator |
2026-01-07 00:34:36.217668 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-07 00:34:36.217675 | orchestrator | Wednesday 07 January 2026 00:34:32 +0000 (0:00:00.953) 0:00:47.595 *****
2026-01-07 00:34:36.217681 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:36.217687 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:36.217693 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:36.217699 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:36.217706 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:36.217712 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:36.217718 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:36.217724 | orchestrator |
2026-01-07 00:34:36.217730 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-07 00:34:36.217737 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:02.265) 0:00:49.860 *****
2026-01-07 00:34:36.217743 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:36.217749 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:36.217755 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:36.217762 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:36.217768 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:36.217774 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:36.217780 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:36.217786 | orchestrator |
2026-01-07 00:34:36.217793 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-07 00:34:36.217799 | orchestrator | Wednesday 07 January 2026 00:34:35 +0000 (0:00:00.747) 0:00:50.608 *****
2026-01-07 00:34:36.217806 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:36.217812 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:36.217818 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:36.217824 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:36.217831 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:36.217837 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:36.217843 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:36.217849 | orchestrator |
2026-01-07 00:34:36.217856 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:34:36.217863 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-07 00:34:36.217871 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.217882 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.564154 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.564422 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.564452 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.564471 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-07 00:34:36.564488 | orchestrator |
2026-01-07 00:34:36.564504 | orchestrator |
2026-01-07 00:34:36.564522 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:34:36.564539 | orchestrator | Wednesday 07 January 2026 00:34:36 +0000 (0:00:00.501) 0:00:51.109 *****
2026-01-07 00:34:36.564556 | orchestrator | ===============================================================================
2026-01-07 00:34:36.564573 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.94s
2026-01-07 00:34:36.564590 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.86s
2026-01-07 00:34:36.564607 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.00s
2026-01-07 00:34:36.564624 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.22s
2026-01-07 00:34:36.564641 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.42s
2026-01-07 00:34:36.564662 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.27s
2026-01-07 00:34:36.564683 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.07s
2026-01-07 00:34:36.564749 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.02s
2026-01-07 00:34:36.564778 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s
2026-01-07 00:34:36.564799 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.74s
2026-01-07 00:34:36.564821 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.72s
2026-01-07 00:34:36.564834 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s
2026-01-07 00:34:36.564843 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s
2026-01-07 00:34:36.564852 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s
2026-01-07 00:34:36.564879 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2026-01-07 00:34:36.564888 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.14s
2026-01-07 00:34:36.564897 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.05s
2026-01-07 00:34:36.564907 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.05s
2026-01-07 00:34:36.564916 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s
2026-01-07 00:34:36.564926 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 0.95s
2026-01-07 00:34:36.838556 | orchestrator | + osism apply wireguard
2026-01-07 00:34:48.876186 | orchestrator | 2026-01-07 00:34:48 | INFO  | Task 333c5dde-6c44-41a3-9bbb-be01d184c450 (wireguard) was prepared for execution.
2026-01-07 00:34:48.876400 | orchestrator | 2026-01-07 00:34:48 | INFO  | It takes a moment until task 333c5dde-6c44-41a3-9bbb-be01d184c450 (wireguard) has been started and output is visible here.
2026-01-07 00:35:05.748361 | orchestrator |
2026-01-07 00:35:05.748503 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-07 00:35:05.748520 | orchestrator |
2026-01-07 00:35:05.748533 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-07 00:35:05.748545 | orchestrator | Wednesday 07 January 2026 00:34:52 +0000 (0:00:00.159) 0:00:00.159 *****
2026-01-07 00:35:05.748557 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:05.748574 | orchestrator |
2026-01-07 00:35:05.748586 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-07 00:35:05.748631 | orchestrator | Wednesday 07 January 2026 00:34:53 +0000 (0:00:01.065) 0:00:01.224 *****
2026-01-07 00:35:05.748642 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.748655 | orchestrator |
2026-01-07 00:35:05.748666 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-07 00:35:05.748677 | orchestrator | Wednesday 07 January 2026 00:34:58 +0000 (0:00:05.150) 0:00:06.374 *****
2026-01-07 00:35:05.748688 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.748699 | orchestrator |
2026-01-07 00:35:05.748710 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-07 00:35:05.748722 | orchestrator | Wednesday 07 January 2026 00:34:59 +0000 (0:00:00.378) 0:00:06.854 *****
2026-01-07 00:35:05.748733 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.748744 | orchestrator |
2026-01-07 00:35:05.748755 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-07 00:35:05.748766 | orchestrator | Wednesday 07 January 2026 00:34:59 +0000 (0:00:00.378) 0:00:07.232 *****
2026-01-07 00:35:05.748777 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:05.748788 | orchestrator |
2026-01-07 00:35:05.748801 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-07 00:35:05.748815 | orchestrator | Wednesday 07 January 2026 00:35:00 +0000 (0:00:00.539) 0:00:07.772 *****
2026-01-07 00:35:05.748829 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:05.748841 | orchestrator |
2026-01-07 00:35:05.748854 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-07 00:35:05.748867 | orchestrator | Wednesday 07 January 2026 00:35:00 +0000 (0:00:00.362) 0:00:08.135 *****
2026-01-07 00:35:05.748880 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:05.748892 | orchestrator |
2026-01-07 00:35:05.748904 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-07 00:35:05.748918 | orchestrator | Wednesday 07 January 2026 00:35:01 +0000 (0:00:00.355) 0:00:08.490 *****
2026-01-07 00:35:05.748931 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.748944 | orchestrator |
2026-01-07 00:35:05.748958 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-07 00:35:05.748971 | orchestrator | Wednesday 07 January 2026 00:35:02 +0000 (0:00:01.016) 0:00:09.507 *****
2026-01-07 00:35:05.748984 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:35:05.748998 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.749011 | orchestrator |
2026-01-07 00:35:05.749024 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-07 00:35:05.749036 | orchestrator | Wednesday 07 January 2026 00:35:02 +0000 (0:00:00.849) 0:00:10.356 *****
2026-01-07 00:35:05.749049 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.749062 | orchestrator |
2026-01-07 00:35:05.749076 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-07 00:35:05.749089 | orchestrator | Wednesday 07 January 2026 00:35:04 +0000 (0:00:01.635) 0:00:11.992 *****
2026-01-07 00:35:05.749102 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:05.749115 | orchestrator |
2026-01-07 00:35:05.749128 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:35:05.749142 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:35:05.749157 | orchestrator |
2026-01-07 00:35:05.749168 | orchestrator |
2026-01-07 00:35:05.749179 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:35:05.749190 | orchestrator | Wednesday 07 January 2026 00:35:05 +0000 (0:00:00.877) 0:00:12.869 *****
2026-01-07 00:35:05.749201 | orchestrator | ===============================================================================
2026-01-07 00:35:05.749230 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.15s
2026-01-07 00:35:05.749241 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s
2026-01-07 00:35:05.749261 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.07s
2026-01-07 00:35:05.749272 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.02s
2026-01-07 00:35:05.749282 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s
2026-01-07 00:35:05.749293 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s
2026-01-07 00:35:05.749305 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2026-01-07 00:35:05.749316 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.48s
2026-01-07 00:35:05.749326 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-01-07 00:35:05.749337 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.36s
2026-01-07 00:35:05.749348 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.36s
2026-01-07 00:35:06.020938 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-07 00:35:06.050808 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-07 00:35:06.050905 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-07 00:35:06.125366 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 200 0 --:--:-- --:--:-- --:--:-- 202
2026-01-07 00:35:06.136913 | orchestrator | + osism apply --environment custom workarounds
2026-01-07 00:35:08.079455 | orchestrator | 2026-01-07 00:35:08 | INFO  | Trying to run play workarounds in environment custom
2026-01-07 00:35:18.180804 | orchestrator | 2026-01-07 00:35:18 | INFO  | Task f7421942-23bb-40b0-883d-b5b689972fe7 (workarounds) was prepared for execution.
2026-01-07 00:35:18.180931 | orchestrator | 2026-01-07 00:35:18 | INFO  | It takes a moment until task f7421942-23bb-40b0-883d-b5b689972fe7 (workarounds) has been started and output is visible here.
2026-01-07 00:35:42.238612 | orchestrator |
2026-01-07 00:35:42.239547 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:35:42.239584 | orchestrator |
2026-01-07 00:35:42.239597 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-07 00:35:42.239634 | orchestrator | Wednesday 07 January 2026 00:35:22 +0000 (0:00:00.120) 0:00:00.120 *****
2026-01-07 00:35:42.239647 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239659 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239671 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239681 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239693 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239704 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239715 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-07 00:35:42.239726 | orchestrator |
2026-01-07 00:35:42.239737 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-07 00:35:42.239748 | orchestrator |
2026-01-07 00:35:42.239759 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-07 00:35:42.239770 | orchestrator | Wednesday 07 January 2026 00:35:22 +0000 (0:00:00.729) 0:00:00.850 *****
2026-01-07 00:35:42.239782 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:42.239795 | orchestrator |
2026-01-07 00:35:42.239806 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-07 00:35:42.239817 | orchestrator |
2026-01-07 00:35:42.239828 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-07 00:35:42.239839 | orchestrator | Wednesday 07 January 2026 00:35:25 +0000 (0:00:02.369) 0:00:03.219 *****
2026-01-07 00:35:42.239850 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:42.239885 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:42.239896 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:42.239907 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:42.239918 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:42.239928 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:42.239939 | orchestrator |
2026-01-07 00:35:42.239950 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-07 00:35:42.239961 | orchestrator |
2026-01-07 00:35:42.239971 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-07 00:35:42.239982 | orchestrator | Wednesday 07 January 2026 00:35:27 +0000 (0:00:01.851) 0:00:05.070 *****
2026-01-07 00:35:42.239994 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240007 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240018 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240029 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240039 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240050 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:35:42.240061 | orchestrator |
2026-01-07 00:35:42.240072 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-07 00:35:42.240129 | orchestrator | Wednesday 07 January 2026 00:35:28 +0000 (0:00:01.524) 0:00:06.595 *****
2026-01-07 00:35:42.240150 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:42.240170 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:42.240188 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:42.240204 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:42.240215 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:42.240226 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:42.240238 | orchestrator |
2026-01-07 00:35:42.240249 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-07 00:35:42.240267 | orchestrator | Wednesday 07 January 2026 00:35:31 +0000 (0:00:03.106) 0:00:09.702 *****
2026-01-07 00:35:42.240278 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:42.240289 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:42.240300 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:42.240311 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:42.240322 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:42.240332 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:42.240343 | orchestrator |
2026-01-07 00:35:42.240354 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-07 00:35:42.240365 | orchestrator |
2026-01-07 00:35:42.240376 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-07 00:35:42.240387 | orchestrator | Wednesday 07 January 2026 00:35:32 +0000 (0:00:00.619) 0:00:10.321 *****
2026-01-07 00:35:42.240398 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:42.240409 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:42.240420 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:42.240430 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:42.240441 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:42.240452 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:42.240462 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:42.240473 | orchestrator |
2026-01-07 00:35:42.240484 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-07 00:35:42.240495 | orchestrator | Wednesday 07 January 2026 00:35:34 +0000 (0:00:01.596) 0:00:11.917 *****
2026-01-07 00:35:42.240505 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:42.240516 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:42.240526 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:42.240547 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:42.240558 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:42.240568 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:42.240601 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:42.240612 | orchestrator |
2026-01-07 00:35:42.240624 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-07 00:35:42.240635 | orchestrator | Wednesday 07 January 2026 00:35:35 +0000 (0:00:01.459) 0:00:13.376 *****
2026-01-07 00:35:42.240646 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:42.240657 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:42.240668 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:42.240678 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:42.240689 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:42.240700 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:42.240710 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:42.240721 | orchestrator |
2026-01-07 00:35:42.240732 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-07 00:35:42.240743 | orchestrator | Wednesday 07 January 2026 00:35:37 +0000 (0:00:01.531) 0:00:14.908 *****
2026-01-07 00:35:42.240754 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:42.240765 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:42.240775 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:42.240786 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:42.240797 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:42.240807 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:42.240818 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:42.240829 | orchestrator |
2026-01-07 00:35:42.240839 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-07 00:35:42.240851 | orchestrator | Wednesday 07 January 2026 00:35:38 +0000 (0:00:01.770) 0:00:16.678 *****
2026-01-07 00:35:42.240861 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:42.240872 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:42.240883 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:42.240893 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:42.240904 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:42.240915 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:42.240925 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:42.240936 | orchestrator |
2026-01-07 00:35:42.240947 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-07 00:35:42.241013 | orchestrator |
2026-01-07 00:35:42.241025 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-07 00:35:42.241036 | orchestrator | Wednesday 07 January 2026 00:35:39 +0000 (0:00:00.554) 0:00:17.233 *****
2026-01-07 00:35:42.241047 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:42.241058 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:42.241069 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:42.241080 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:42.241256 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:42.241326 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:42.241340 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:42.241351 | orchestrator |
2026-01-07 00:35:42.241363 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:35:42.241375 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:35:42.241388 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241399 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241410 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241435 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241454 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241493 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:35:42.241512 | orchestrator |
2026-01-07 00:35:42.241530 | orchestrator |
2026-01-07 00:35:42.241548 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:35:42.241563 | orchestrator | Wednesday 07 January 2026 00:35:42 +0000 (0:00:02.875) 0:00:20.108 *****
2026-01-07 00:35:42.241580 | orchestrator | ===============================================================================
2026-01-07 00:35:42.241598 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.11s
2026-01-07 00:35:42.241616 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s
2026-01-07 00:35:42.241633 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2026-01-07 00:35:42.241653 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s
2026-01-07 00:35:42.241671 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s
2026-01-07 00:35:42.241690 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2026-01-07 00:35:42.241706 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2026-01-07 00:35:42.241718 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s
2026-01-07 00:35:42.241728 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.46s
2026-01-07 00:35:42.241739 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s
2026-01-07 00:35:42.241750 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.62s
2026-01-07 00:35:42.241776 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.55s
2026-01-07 00:35:42.892340 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-07 00:35:54.914510 | orchestrator | 2026-01-07 00:35:54 | INFO  | Task 8c4a7f51-c0bc-459c-af48-bb2d038a36ac (reboot) was prepared for execution.
2026-01-07 00:35:54.914627 | orchestrator | 2026-01-07 00:35:54 | INFO  | It takes a moment until task 8c4a7f51-c0bc-459c-af48-bb2d038a36ac (reboot) has been started and output is visible here.
2026-01-07 00:36:04.315059 | orchestrator | 2026-01-07 00:36:04.315182 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.315200 | orchestrator | 2026-01-07 00:36:04.315212 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.315224 | orchestrator | Wednesday 07 January 2026 00:35:58 +0000 (0:00:00.149) 0:00:00.149 ***** 2026-01-07 00:36:04.315235 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:04.315247 | orchestrator | 2026-01-07 00:36:04.315258 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.315270 | orchestrator | Wednesday 07 January 2026 00:35:58 +0000 (0:00:00.083) 0:00:00.232 ***** 2026-01-07 00:36:04.315281 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:04.315292 | orchestrator | 2026-01-07 00:36:04.315303 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:36:04.315313 | orchestrator | Wednesday 07 January 2026 00:35:59 +0000 (0:00:00.882) 0:00:01.115 ***** 2026-01-07 00:36:04.315324 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:04.315335 | orchestrator | 2026-01-07 00:36:04.315346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.315357 | orchestrator | 2026-01-07 00:36:04.315368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.315403 | orchestrator | Wednesday 07 January 2026 00:35:59 +0000 (0:00:00.092) 0:00:01.207 ***** 2026-01-07 00:36:04.315414 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:04.315425 | orchestrator | 2026-01-07 00:36:04.315436 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.315447 | orchestrator | Wednesday 07 January 
2026 00:35:59 +0000 (0:00:00.095) 0:00:01.303 ***** 2026-01-07 00:36:04.315458 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:04.315468 | orchestrator | 2026-01-07 00:36:04.315479 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:36:04.315490 | orchestrator | Wednesday 07 January 2026 00:36:00 +0000 (0:00:00.633) 0:00:01.937 ***** 2026-01-07 00:36:04.315501 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:04.315511 | orchestrator | 2026-01-07 00:36:04.315522 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.315535 | orchestrator | 2026-01-07 00:36:04.315549 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.315562 | orchestrator | Wednesday 07 January 2026 00:36:00 +0000 (0:00:00.095) 0:00:02.032 ***** 2026-01-07 00:36:04.315576 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:04.315589 | orchestrator | 2026-01-07 00:36:04.315602 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.315615 | orchestrator | Wednesday 07 January 2026 00:36:00 +0000 (0:00:00.141) 0:00:02.174 ***** 2026-01-07 00:36:04.315628 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:04.315640 | orchestrator | 2026-01-07 00:36:04.315653 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:36:04.315666 | orchestrator | Wednesday 07 January 2026 00:36:01 +0000 (0:00:00.655) 0:00:02.830 ***** 2026-01-07 00:36:04.315678 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:04.315691 | orchestrator | 2026-01-07 00:36:04.315705 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.315717 | orchestrator | 2026-01-07 00:36:04.315731 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.315743 | orchestrator | Wednesday 07 January 2026 00:36:01 +0000 (0:00:00.110) 0:00:02.940 ***** 2026-01-07 00:36:04.315756 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:04.315769 | orchestrator | 2026-01-07 00:36:04.315782 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.315811 | orchestrator | Wednesday 07 January 2026 00:36:01 +0000 (0:00:00.087) 0:00:03.028 ***** 2026-01-07 00:36:04.315824 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:04.315837 | orchestrator | 2026-01-07 00:36:04.315850 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:36:04.315863 | orchestrator | Wednesday 07 January 2026 00:36:02 +0000 (0:00:00.667) 0:00:03.696 ***** 2026-01-07 00:36:04.315876 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:04.315889 | orchestrator | 2026-01-07 00:36:04.315902 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.315913 | orchestrator | 2026-01-07 00:36:04.315924 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.315935 | orchestrator | Wednesday 07 January 2026 00:36:02 +0000 (0:00:00.110) 0:00:03.806 ***** 2026-01-07 00:36:04.315945 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:04.315956 | orchestrator | 2026-01-07 00:36:04.315967 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.316003 | orchestrator | Wednesday 07 January 2026 00:36:02 +0000 (0:00:00.089) 0:00:03.895 ***** 2026-01-07 00:36:04.316015 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:04.316027 | orchestrator | 2026-01-07 00:36:04.316038 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-01-07 00:36:04.316049 | orchestrator | Wednesday 07 January 2026 00:36:03 +0000 (0:00:00.621) 0:00:04.517 ***** 2026-01-07 00:36:04.316059 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:04.316078 | orchestrator | 2026-01-07 00:36:04.316089 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:36:04.316100 | orchestrator | 2026-01-07 00:36:04.316111 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:36:04.316121 | orchestrator | Wednesday 07 January 2026 00:36:03 +0000 (0:00:00.098) 0:00:04.615 ***** 2026-01-07 00:36:04.316132 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:04.316142 | orchestrator | 2026-01-07 00:36:04.316153 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:36:04.316165 | orchestrator | Wednesday 07 January 2026 00:36:03 +0000 (0:00:00.107) 0:00:04.723 ***** 2026-01-07 00:36:04.316183 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:04.316202 | orchestrator | 2026-01-07 00:36:04.316220 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:36:04.316237 | orchestrator | Wednesday 07 January 2026 00:36:03 +0000 (0:00:00.634) 0:00:05.357 ***** 2026-01-07 00:36:04.316278 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:04.316296 | orchestrator | 2026-01-07 00:36:04.316313 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:36:04.316331 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:36:04.316349 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:36:04.316366 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-07 00:36:04.316384 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:36:04.316403 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:36:04.316422 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:36:04.316441 | orchestrator | 2026-01-07 00:36:04.316459 | orchestrator | 2026-01-07 00:36:04.316479 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:36:04.316491 | orchestrator | Wednesday 07 January 2026 00:36:04 +0000 (0:00:00.039) 0:00:05.397 ***** 2026-01-07 00:36:04.316502 | orchestrator | =============================================================================== 2026-01-07 00:36:04.316513 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.10s 2026-01-07 00:36:04.316523 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2026-01-07 00:36:04.316534 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-01-07 00:36:04.593374 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-07 00:36:16.629571 | orchestrator | 2026-01-07 00:36:16 | INFO  | Task 3b07747c-8317-4468-b3e9-46465acb3041 (wait-for-connection) was prepared for execution. 2026-01-07 00:36:16.629692 | orchestrator | 2026-01-07 00:36:16 | INFO  | It takes a moment until task 3b07747c-8317-4468-b3e9-46465acb3041 (wait-for-connection) has been started and output is visible here. 
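The "Exit playbook, if user did not mean to reboot systems" task above acts as a confirmation gate: the reboot plays abort unless the caller explicitly passes `-e ireallymeanit=yes`, which is why the task shows as `skipping` on every node. A minimal shell sketch of the same guard pattern (the variable name is taken from the log; the surrounding function is an assumption for illustration):

```shell
# Confirmation-gate sketch: refuse to act unless the caller passed an
# explicit "yes", mirroring the ireallymeanit extra-var in the log.
ireallymeanit=yes   # the CI job passes this explicitly; a cautious default would be "no"

confirm_reboot() {
    if [ "$1" != yes ]; then
        # corresponds to the "Exit playbook, ..." task firing
        echo "exiting: pass ireallymeanit=yes to really reboot" >&2
        return 1
    fi
    # corresponds to that task being skipped and the reboot proceeding
    echo "confirmed, proceeding with reboot"
}

confirm_reboot "$ireallymeanit"   # prints: confirmed, proceeding with reboot
```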
2026-01-07 00:36:32.173352 | orchestrator | 2026-01-07 00:36:32.173474 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-07 00:36:32.173492 | orchestrator | 2026-01-07 00:36:32.173505 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-07 00:36:32.173517 | orchestrator | Wednesday 07 January 2026 00:36:20 +0000 (0:00:00.165) 0:00:00.165 ***** 2026-01-07 00:36:32.173530 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:32.173569 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:32.173581 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:32.173593 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:32.173605 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:32.173617 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:32.173628 | orchestrator | 2026-01-07 00:36:32.173656 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:36:32.173668 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173681 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173692 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173704 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173715 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173726 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:36:32.173737 | orchestrator | 2026-01-07 00:36:32.173748 | orchestrator | 2026-01-07 00:36:32.173759 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 00:36:32.173795 | orchestrator | Wednesday 07 January 2026 00:36:31 +0000 (0:00:11.538) 0:00:11.703 ***** 2026-01-07 00:36:32.173807 | orchestrator | =============================================================================== 2026-01-07 00:36:32.173818 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-01-07 00:36:32.422174 | orchestrator | + osism apply hddtemp 2026-01-07 00:36:44.441412 | orchestrator | 2026-01-07 00:36:44 | INFO  | Task be73f6bf-2e27-4fef-a206-248983a304eb (hddtemp) was prepared for execution. 2026-01-07 00:36:44.441523 | orchestrator | 2026-01-07 00:36:44 | INFO  | It takes a moment until task be73f6bf-2e27-4fef-a206-248983a304eb (hddtemp) has been started and output is visible here. 2026-01-07 00:37:11.918909 | orchestrator | 2026-01-07 00:37:11.919064 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-07 00:37:11.919084 | orchestrator | 2026-01-07 00:37:11.919096 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-07 00:37:11.919108 | orchestrator | Wednesday 07 January 2026 00:36:48 +0000 (0:00:00.184) 0:00:00.184 ***** 2026-01-07 00:37:11.919120 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:11.919133 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:11.919145 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:11.919155 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:11.919167 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:11.919178 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:11.919189 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:11.919200 | orchestrator | 2026-01-07 00:37:11.919211 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-07 00:37:11.919222 | orchestrator | Wednesday 07 January 2026 
00:36:48 +0000 (0:00:00.502) 0:00:00.686 ***** 2026-01-07 00:37:11.919235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:37:11.919250 | orchestrator | 2026-01-07 00:37:11.919261 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-07 00:37:11.919272 | orchestrator | Wednesday 07 January 2026 00:36:49 +0000 (0:00:00.868) 0:00:01.555 ***** 2026-01-07 00:37:11.919284 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:11.919296 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:11.919334 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:11.919345 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:11.919357 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:11.919370 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:11.919383 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:11.919396 | orchestrator | 2026-01-07 00:37:11.919408 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-07 00:37:11.919421 | orchestrator | Wednesday 07 January 2026 00:36:51 +0000 (0:00:01.943) 0:00:03.498 ***** 2026-01-07 00:37:11.919434 | orchestrator | changed: [testbed-manager] 2026-01-07 00:37:11.919448 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:37:11.919460 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:37:11.919473 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:37:11.919486 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:37:11.919498 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:37:11.919510 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:37:11.919522 | orchestrator | 2026-01-07 00:37:11.919536 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-07 00:37:11.919548 | orchestrator | Wednesday 07 January 2026 00:36:52 +0000 (0:00:00.999) 0:00:04.498 ***** 2026-01-07 00:37:11.919562 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:37:11.919575 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:37:11.919588 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:37:11.919600 | orchestrator | ok: [testbed-manager] 2026-01-07 00:37:11.919613 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:37:11.919625 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:37:11.919637 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:37:11.919649 | orchestrator | 2026-01-07 00:37:11.919662 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-07 00:37:11.919675 | orchestrator | Wednesday 07 January 2026 00:36:54 +0000 (0:00:01.721) 0:00:06.219 ***** 2026-01-07 00:37:11.919687 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:37:11.919700 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:37:11.919713 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:37:11.919724 | orchestrator | changed: [testbed-manager] 2026-01-07 00:37:11.919734 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:37:11.919745 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:37:11.919779 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:37:11.919799 | orchestrator | 2026-01-07 00:37:11.919848 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-07 00:37:11.919872 | orchestrator | Wednesday 07 January 2026 00:36:54 +0000 (0:00:00.698) 0:00:06.918 ***** 2026-01-07 00:37:11.919889 | orchestrator | changed: [testbed-manager] 2026-01-07 00:37:11.919905 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:37:11.919922 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:37:11.919939 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:37:11.919954 | orchestrator | changed: 
[testbed-node-4] 2026-01-07 00:37:11.919969 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:37:11.919984 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:37:11.920000 | orchestrator | 2026-01-07 00:37:11.920015 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-07 00:37:11.920031 | orchestrator | Wednesday 07 January 2026 00:37:08 +0000 (0:00:13.666) 0:00:20.585 ***** 2026-01-07 00:37:11.920050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:37:11.920069 | orchestrator | 2026-01-07 00:37:11.920087 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-07 00:37:11.920105 | orchestrator | Wednesday 07 January 2026 00:37:09 +0000 (0:00:01.155) 0:00:21.740 ***** 2026-01-07 00:37:11.920123 | orchestrator | changed: [testbed-manager] 2026-01-07 00:37:11.920141 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:37:11.920158 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:37:11.920192 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:37:11.920212 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:37:11.920231 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:37:11.920250 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:37:11.920268 | orchestrator | 2026-01-07 00:37:11.920283 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:37:11.920295 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:37:11.920330 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920343 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920354 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920364 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920375 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920386 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:37:11.920396 | orchestrator | 2026-01-07 00:37:11.920407 | orchestrator | 2026-01-07 00:37:11.920418 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:37:11.920429 | orchestrator | Wednesday 07 January 2026 00:37:11 +0000 (0:00:01.899) 0:00:23.639 ***** 2026-01-07 00:37:11.920440 | orchestrator | =============================================================================== 2026-01-07 00:37:11.920450 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.67s 2026-01-07 00:37:11.920461 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2026-01-07 00:37:11.920472 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2026-01-07 00:37:11.920483 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.72s 2026-01-07 00:37:11.920494 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2026-01-07 00:37:11.920504 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.00s 2026-01-07 00:37:11.920515 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.87s 2026-01-07 00:37:11.920526 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.70s
2026-01-07 00:37:11.920537 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s
2026-01-07 00:37:12.190665 | orchestrator | ++ semver latest 7.1.1
2026-01-07 00:37:12.240464 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-07 00:37:12.240589 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-07 00:37:12.240606 | orchestrator | + sudo systemctl restart manager.service
2026-01-07 00:37:49.457340 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-07 00:37:49.457457 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-07 00:37:49.457473 | orchestrator | + local max_attempts=60
2026-01-07 00:37:49.457486 | orchestrator | + local name=ceph-ansible
2026-01-07 00:37:49.457497 | orchestrator | + local attempt_num=1
2026-01-07 00:37:49.457509 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:37:49.490540 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:37:49.490634 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:37:49.490648 | orchestrator | + sleep 5
2026-01-07 00:37:54.496982 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:37:54.516937 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:37:54.517037 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:37:54.517085 | orchestrator | + sleep 5
2026-01-07 00:37:59.520075 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:37:59.555170 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:37:59.555281 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:37:59.555296 | orchestrator | + sleep 5
2026-01-07 00:38:04.560200 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:04.591834 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:04.591903 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:04.591911 | orchestrator | + sleep 5
2026-01-07 00:38:09.595817 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:09.638369 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:09.638467 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:09.638482 | orchestrator | + sleep 5
2026-01-07 00:38:14.643855 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:14.682974 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:14.683073 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:14.683086 | orchestrator | + sleep 5
2026-01-07 00:38:19.686947 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:19.725838 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:19.725914 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:19.725928 | orchestrator | + sleep 5
2026-01-07 00:38:24.729974 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:24.754374 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:24.754763 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:24.754785 | orchestrator | + sleep 5
2026-01-07 00:38:29.757564 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:29.772354 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:29.772457 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:29.772473 | orchestrator | + sleep 5
2026-01-07 00:38:34.775737 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:34.811491 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:34.811625 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:34.811641 | orchestrator | + sleep 5
2026-01-07 00:38:39.816761 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:39.851834 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:39.851926 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:39.851942 | orchestrator | + sleep 5
2026-01-07 00:38:44.857168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:44.900919 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:44.901016 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:44.901031 | orchestrator | + sleep 5
2026-01-07 00:38:49.906112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:49.946969 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:49.947067 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:38:49.947082 | orchestrator | + sleep 5
2026-01-07 00:38:54.951468 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:38:54.987623 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:54.987697 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-07 00:38:54.987705 | orchestrator | + local max_attempts=60
2026-01-07 00:38:54.987710 | orchestrator | + local name=kolla-ansible
2026-01-07 00:38:54.987716 | orchestrator | + local attempt_num=1
2026-01-07 00:38:54.988794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-07 00:38:55.025513 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:55.025614 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-07 00:38:55.025628 | orchestrator | + local max_attempts=60
2026-01-07 00:38:55.025638 | orchestrator | + local name=osism-ansible
2026-01-07 00:38:55.025647 | orchestrator | + local attempt_num=1
2026-01-07 00:38:55.026001 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-07 00:38:55.055288 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:38:55.055387 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-07 00:38:55.055436 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-07 00:38:55.215688 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-07 00:38:55.642610 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-07 00:38:55.643137 | orchestrator | + osism apply gather-facts
2026-01-07 00:39:07.799395 | orchestrator | 2026-01-07 00:39:07 | INFO  | Task 3e37dfe9-371c-4395-bc99-306ddce692e4 (gather-facts) was prepared for execution.
2026-01-07 00:39:07.799576 | orchestrator | 2026-01-07 00:39:07 | INFO  | It takes a moment until task 3e37dfe9-371c-4395-bc99-306ddce692e4 (gather-facts) has been started and output is visible here.
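The xtrace above shows `wait_for_container_healthy` polling each manager container's Docker health status every 5 seconds until it reports `healthy` (ceph-ansible goes `unhealthy` → `starting` → `healthy` after the `manager.service` restart). A hedged reconstruction of that helper, not the actual script; `get_health_status` is a stub standing in for `/usr/bin/docker inspect -f '{{.State.Health.Status}}' <name>` so the sketch is runnable:

```shell
# Poll a container's health status until "healthy", up to max_attempts polls.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while :; do
        status=$(get_health_status "$attempt_num")
        if [ "$status" = healthy ]; then
            echo "$name is healthy after $attempt_num attempt(s)"
            return 0
        fi
        if [ "$attempt_num" -eq "$max_attempts" ]; then
            echo "$name never became healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 0   # the real trace sleeps 5 seconds between polls
    done
}

# Stub for docker inspect: pretend the container reports "starting" on the
# first two polls and "healthy" on the third, mimicking the log's progression.
get_health_status() {
    if [ "$1" -ge 3 ]; then echo healthy; else echo starting; fi
}

wait_for_container_healthy 60 ceph-ansible  # prints: ceph-ansible is healthy after 3 attempt(s)
```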
2026-01-07 00:39:20.434980 | orchestrator | 2026-01-07 00:39:20.435056 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:39:20.435067 | orchestrator | 2026-01-07 00:39:20.435074 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-07 00:39:20.435080 | orchestrator | Wednesday 07 January 2026 00:39:11 +0000 (0:00:00.159) 0:00:00.159 ***** 2026-01-07 00:39:20.435086 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:39:20.435093 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:39:20.435099 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:39:20.435105 | orchestrator | ok: [testbed-manager] 2026-01-07 00:39:20.435111 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:39:20.435117 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:39:20.435122 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:39:20.435128 | orchestrator | 2026-01-07 00:39:20.435134 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:39:20.435140 | orchestrator | 2026-01-07 00:39:20.435146 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:39:20.435152 | orchestrator | Wednesday 07 January 2026 00:39:19 +0000 (0:00:08.522) 0:00:08.682 ***** 2026-01-07 00:39:20.435158 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:39:20.435164 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:20.435170 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:20.435176 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:20.435182 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:20.435188 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:20.435194 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:20.435199 | orchestrator | 2026-01-07 00:39:20.435205 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-07 00:39:20.435211 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435218 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435224 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435230 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435236 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435242 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435248 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:39:20.435254 | orchestrator | 2026-01-07 00:39:20.435260 | orchestrator | 2026-01-07 00:39:20.435266 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:39:20.435272 | orchestrator | Wednesday 07 January 2026 00:39:20 +0000 (0:00:00.432) 0:00:09.114 ***** 2026-01-07 00:39:20.435277 | orchestrator | =============================================================================== 2026-01-07 00:39:20.435301 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.52s 2026-01-07 00:39:20.435307 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-01-07 00:39:20.617847 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-07 00:39:20.627371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-07 00:39:20.642171 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-07 00:39:20.651727 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-07 00:39:20.663528 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-07 00:39:20.673202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-07 00:39:20.683305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-07 00:39:20.690922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-07 00:39:20.699899 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-07 00:39:20.709669 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-07 00:39:20.720186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-07 00:39:20.728782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-07 00:39:20.736655 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-07 00:39:20.745014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-07 00:39:20.754644 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-07 00:39:20.763447 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-07 00:39:20.772993 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-07 00:39:20.783854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-07 00:39:20.793164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-07 00:39:20.802668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-07 00:39:20.813360 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-07 00:39:21.169821 | orchestrator | ok: Runtime: 0:23:59.465500 2026-01-07 00:39:21.289773 | 2026-01-07 00:39:21.289942 | TASK [Deploy services] 2026-01-07 00:39:21.827408 | orchestrator | skipping: Conditional result was False 2026-01-07 00:39:21.847509 | 2026-01-07 00:39:21.847697 | TASK [Deploy in a nutshell] 2026-01-07 00:39:22.603542 | orchestrator | + set -e 2026-01-07 00:39:22.603732 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-07 00:39:22.603768 | orchestrator | ++ export INTERACTIVE=false 2026-01-07 00:39:22.603804 | orchestrator | ++ INTERACTIVE=false 2026-01-07 00:39:22.603826 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-07 00:39:22.603849 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-07 00:39:22.603872 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:39:22.603934 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:39:22.603971 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 00:39:22.603995 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:39:22.604018 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:39:22.604039 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-07 00:39:22.604084 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-01-07 00:39:22.604104 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-07 00:39:22.604126 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-07 00:39:22.604144 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-07 00:39:22.604167 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-07 00:39:22.604184 | orchestrator | 2026-01-07 00:39:22.604205 | orchestrator | # PULL IMAGES 2026-01-07 00:39:22.604223 | orchestrator | 2026-01-07 00:39:22.604241 | orchestrator | ++ export ARA=false 2026-01-07 00:39:22.604260 | orchestrator | ++ ARA=false 2026-01-07 00:39:22.604280 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-07 00:39:22.604298 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-07 00:39:22.604316 | orchestrator | ++ export TEMPEST=true 2026-01-07 00:39:22.604335 | orchestrator | ++ TEMPEST=true 2026-01-07 00:39:22.604355 | orchestrator | ++ export IS_ZUUL=true 2026-01-07 00:39:22.604372 | orchestrator | ++ IS_ZUUL=true 2026-01-07 00:39:22.604392 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-01-07 00:39:22.604410 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-01-07 00:39:22.604428 | orchestrator | ++ export EXTERNAL_API=false 2026-01-07 00:39:22.604447 | orchestrator | ++ EXTERNAL_API=false 2026-01-07 00:39:22.604463 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-07 00:39:22.604544 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-07 00:39:22.604563 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-07 00:39:22.604581 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-07 00:39:22.604599 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-07 00:39:22.604629 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-07 00:39:22.604649 | orchestrator | + echo 2026-01-07 00:39:22.604669 | orchestrator | + echo '# PULL IMAGES' 2026-01-07 00:39:22.604687 | orchestrator | + echo 2026-01-07 00:39:22.604717 | orchestrator | ++ semver latest 7.0.0 2026-01-07 
00:39:22.644071 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:39:22.644160 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-07 00:39:22.644174 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-07 00:39:24.300619 | orchestrator | 2026-01-07 00:39:24 | INFO  | Trying to run play pull-images in environment custom 2026-01-07 00:39:34.458607 | orchestrator | 2026-01-07 00:39:34 | INFO  | Task 7c8d65d9-1155-47e8-a667-18448d5c824f (pull-images) was prepared for execution. 2026-01-07 00:39:34.458757 | orchestrator | 2026-01-07 00:39:34 | INFO  | Task 7c8d65d9-1155-47e8-a667-18448d5c824f is running in background. No more output. Check ARA for logs. 2026-01-07 00:39:36.613981 | orchestrator | 2026-01-07 00:39:36 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-07 00:39:46.853863 | orchestrator | 2026-01-07 00:39:46 | INFO  | Task d4bb367b-61eb-4a19-8f3e-071d288ef9ca (wipe-partitions) was prepared for execution. 2026-01-07 00:39:46.854009 | orchestrator | 2026-01-07 00:39:46 | INFO  | It takes a moment until task d4bb367b-61eb-4a19-8f3e-071d288ef9ca (wipe-partitions) has been started and output is visible here. 
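The trace above shows the bootstrap script gating `osism apply ... pull-images` on a version check: `semver latest 7.0.0` returns `-1` (so `[[ -1 -ge 0 ]]` fails), and the fallback `[[ latest == latest ]]` match lets the pull proceed. A minimal Python sketch of that gating logic; `semver_cmp` and `should_pull_images` are hypothetical stand-ins for the shell helper, not part of the testbed code:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted versions; non-numeric strings such as
    'latest' are treated as unparseable and compare as -1,
    matching the `semver latest 7.0.0` -> -1 result in the log."""
    def parse(v):
        try:
            return tuple(int(x) for x in v.split("."))
        except ValueError:
            return None
    pa, pb = parse(a), parse(b)
    if pa is None or pb is None:
        return -1
    return (pa > pb) - (pa < pb)

def should_pull_images(manager_version: str) -> bool:
    # Mirrors the two shell tests in the trace:
    #   [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]  or
    #   [[ "$MANAGER_VERSION" == latest ]]
    return semver_cmp(manager_version, "7.0.0") >= 0 or manager_version == "latest"
```

With `MANAGER_VERSION=latest` (as exported above), the first test fails and the string match succeeds, which is why the pull still runs.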
2026-01-07 00:39:58.307257 | orchestrator | 2026-01-07 00:39:58.307366 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-07 00:39:58.307374 | orchestrator | 2026-01-07 00:39:58.307379 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-07 00:39:58.307390 | orchestrator | Wednesday 07 January 2026 00:39:50 +0000 (0:00:00.096) 0:00:00.096 ***** 2026-01-07 00:39:58.307396 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:58.307401 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:58.307435 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:58.307440 | orchestrator | 2026-01-07 00:39:58.307444 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-07 00:39:58.307464 | orchestrator | Wednesday 07 January 2026 00:39:50 +0000 (0:00:00.615) 0:00:00.712 ***** 2026-01-07 00:39:58.307469 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:58.307473 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:58.307480 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:58.307484 | orchestrator | 2026-01-07 00:39:58.307487 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-07 00:39:58.307491 | orchestrator | Wednesday 07 January 2026 00:39:51 +0000 (0:00:00.292) 0:00:01.005 ***** 2026-01-07 00:39:58.307495 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:39:58.307500 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:39:58.307504 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:39:58.307508 | orchestrator | 2026-01-07 00:39:58.307512 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-07 00:39:58.307516 | orchestrator | Wednesday 07 January 2026 00:39:51 +0000 (0:00:00.552) 0:00:01.558 ***** 2026-01-07 00:39:58.307520 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:39:58.307523 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:58.307527 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:39:58.307531 | orchestrator | 2026-01-07 00:39:58.307535 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-07 00:39:58.307539 | orchestrator | Wednesday 07 January 2026 00:39:51 +0000 (0:00:00.215) 0:00:01.773 ***** 2026-01-07 00:39:58.307543 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:39:58.307549 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:39:58.307553 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:39:58.307557 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:39:58.307561 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:39:58.307565 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:39:58.307568 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:39:58.307572 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:39:58.307576 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:39:58.307580 | orchestrator | 2026-01-07 00:39:58.307583 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-07 00:39:58.307588 | orchestrator | Wednesday 07 January 2026 00:39:53 +0000 (0:00:01.221) 0:00:02.995 ***** 2026-01-07 00:39:58.307592 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:39:58.307596 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:39:58.307599 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:39:58.307603 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:39:58.307607 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:39:58.307611 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:39:58.307614 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:39:58.307618 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:39:58.307622 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:39:58.307626 | orchestrator | 2026-01-07 00:39:58.307629 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-07 00:39:58.307633 | orchestrator | Wednesday 07 January 2026 00:39:54 +0000 (0:00:01.523) 0:00:04.519 ***** 2026-01-07 00:39:58.307637 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-07 00:39:58.307641 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-07 00:39:58.307644 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-07 00:39:58.307648 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-07 00:39:58.307652 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-07 00:39:58.307659 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-07 00:39:58.307663 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-07 00:39:58.307670 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-07 00:39:58.307674 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-07 00:39:58.307678 | orchestrator | 2026-01-07 00:39:58.307681 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-07 00:39:58.307685 | orchestrator | Wednesday 07 January 2026 00:39:56 +0000 (0:00:02.181) 0:00:06.700 ***** 2026-01-07 00:39:58.307689 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:58.307693 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:58.307697 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:58.307700 | orchestrator | 2026-01-07 00:39:58.307704 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-07 00:39:58.307708 | orchestrator | Wednesday 07 January 2026 00:39:57 +0000 (0:00:00.583) 0:00:07.284 ***** 2026-01-07 00:39:58.307712 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:58.307715 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:39:58.307719 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:39:58.307723 | orchestrator | 2026-01-07 00:39:58.307727 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:39:58.307732 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:58.307737 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:58.307752 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:39:58.307756 | orchestrator | 2026-01-07 00:39:58.307760 | orchestrator | 2026-01-07 00:39:58.307764 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:39:58.307768 | orchestrator | Wednesday 07 January 2026 00:39:58 +0000 (0:00:00.629) 0:00:07.913 ***** 2026-01-07 00:39:58.307772 | orchestrator | =============================================================================== 2026-01-07 00:39:58.307777 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-01-07 00:39:58.307784 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.52s 2026-01-07 00:39:58.307792 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2026-01-07 00:39:58.307800 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-01-07 00:39:58.307805 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.62s 2026-01-07 00:39:58.307811 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2026-01-07 00:39:58.307817 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-01-07 00:39:58.307824 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2026-01-07 00:39:58.307829 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2026-01-07 00:40:10.585975 | orchestrator | 2026-01-07 00:40:10 | INFO  | Task 077c895c-1a8e-4570-a369-2c3b329fada7 (facts) was prepared for execution. 2026-01-07 00:40:10.586183 | orchestrator | 2026-01-07 00:40:10 | INFO  | It takes a moment until task 077c895c-1a8e-4570-a369-2c3b329fada7 (facts) has been started and output is visible here. 2026-01-07 00:40:21.853590 | orchestrator | 2026-01-07 00:40:21.853742 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-07 00:40:21.853756 | orchestrator | 2026-01-07 00:40:21.853765 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:40:21.853774 | orchestrator | Wednesday 07 January 2026 00:40:14 +0000 (0:00:00.189) 0:00:00.189 ***** 2026-01-07 00:40:21.853782 | orchestrator | ok: [testbed-manager] 2026-01-07 00:40:21.853791 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:21.853800 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:21.853838 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:21.853847 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:21.853854 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:21.853862 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:21.853870 | orchestrator | 2026-01-07 00:40:21.853879 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:40:21.853887 | 
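The wipe-partitions play above removes filesystem signatures with `wipefs` and then overwrites the first 32M of each OSD device (`/dev/sdb`..`/dev/sdd`) with zeros before reloading udev rules. A file-based sketch of the zeroing step (no root required); in the play this runs against block devices and is followed by `udevadm` calls so the kernel re-reads the now-empty partition tables:

```python
def zero_leading(path: str, mib: int = 32) -> int:
    """Overwrite the first `mib` MiB of `path` with zeros, the
    Python equivalent of `dd if=/dev/zero of=PATH bs=1M count=32`
    as used by the 'Overwrite first 32M with zeros' task.
    Returns the number of bytes written."""
    n = mib * 1024 * 1024
    with open(path, "r+b") as f:  # r+b: write in place, keep the rest
        f.write(b"\x00" * n)
    return n
```

Opening with `r+b` rather than `wb` matters: it zeroes only the leading region, leaving any data past the first `mib` MiB intact, just as `dd` with a `count` limit does.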
orchestrator | Wednesday 07 January 2026 00:40:15 +0000 (0:00:00.910) 0:00:01.099 ***** 2026-01-07 00:40:21.853895 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:40:21.853904 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:40:21.853912 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:40:21.853919 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:40:21.853927 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:21.853934 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:21.853942 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:40:21.853950 | orchestrator | 2026-01-07 00:40:21.853958 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:40:21.853966 | orchestrator | 2026-01-07 00:40:21.853973 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-07 00:40:21.853981 | orchestrator | Wednesday 07 January 2026 00:40:16 +0000 (0:00:01.042) 0:00:02.141 ***** 2026-01-07 00:40:21.853989 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:40:21.853997 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:40:21.854006 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:40:21.854013 | orchestrator | ok: [testbed-manager] 2026-01-07 00:40:21.854085 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:21.854094 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:21.854103 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:21.854112 | orchestrator | 2026-01-07 00:40:21.854121 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:40:21.854129 | orchestrator | 2026-01-07 00:40:21.854139 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:40:21.854165 | orchestrator | Wednesday 07 January 2026 00:40:21 +0000 (0:00:04.891) 0:00:07.033 ***** 2026-01-07 00:40:21.854174 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:40:21.854182 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:40:21.854191 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:40:21.854200 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:40:21.854209 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:21.854218 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:21.854226 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:40:21.854235 | orchestrator | 2026-01-07 00:40:21.854244 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:40:21.854253 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854265 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854273 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854282 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854291 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854301 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854310 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:40:21.854318 | orchestrator | 2026-01-07 00:40:21.854337 | orchestrator | 2026-01-07 00:40:21.854347 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:40:21.854356 | orchestrator | Wednesday 07 January 2026 00:40:21 +0000 (0:00:00.431) 0:00:07.465 ***** 2026-01-07 00:40:21.854365 | orchestrator | =============================================================================== 
2026-01-07 00:40:21.854391 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.89s 2026-01-07 00:40:21.854401 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2026-01-07 00:40:21.854410 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.91s 2026-01-07 00:40:21.854419 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-01-07 00:40:23.862750 | orchestrator | 2026-01-07 00:40:23 | INFO  | Task 1e563638-70a3-48da-93cf-0411aed7e0cc (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-07 00:40:23.862923 | orchestrator | 2026-01-07 00:40:23 | INFO  | It takes a moment until task 1e563638-70a3-48da-93cf-0411aed7e0cc (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-07 00:40:34.345180 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:40:34.345297 | orchestrator | 2.16.14 2026-01-07 00:40:34.345312 | orchestrator | 2026-01-07 00:40:34.345323 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:40:34.345334 | orchestrator | 2026-01-07 00:40:34.345347 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:40:34.345422 | orchestrator | Wednesday 07 January 2026 00:40:28 +0000 (0:00:00.293) 0:00:00.293 ***** 2026-01-07 00:40:34.345440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-07 00:40:34.345457 | orchestrator | 2026-01-07 00:40:34.345472 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:40:34.345490 | orchestrator | Wednesday 07 January 2026 00:40:28 +0000 (0:00:00.221) 0:00:00.514 ***** 2026-01-07 00:40:34.345507 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:34.345524 | orchestrator | 
2026-01-07 00:40:34.345541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345552 | orchestrator | Wednesday 07 January 2026 00:40:28 +0000 (0:00:00.197) 0:00:00.712 ***** 2026-01-07 00:40:34.345563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-07 00:40:34.345573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-07 00:40:34.345583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-07 00:40:34.345593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-07 00:40:34.345603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-07 00:40:34.345612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-07 00:40:34.345622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-07 00:40:34.345632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-07 00:40:34.345641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-07 00:40:34.345651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-07 00:40:34.345670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-07 00:40:34.345680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-07 00:40:34.345692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-07 00:40:34.345703 | orchestrator | 2026-01-07 00:40:34.345714 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-07 00:40:34.345746 | orchestrator | Wednesday 07 January 2026 00:40:28 +0000 (0:00:00.353) 0:00:01.065 ***** 2026-01-07 00:40:34.345757 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.345769 | orchestrator | 2026-01-07 00:40:34.345780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345792 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.162) 0:00:01.228 ***** 2026-01-07 00:40:34.345803 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.345814 | orchestrator | 2026-01-07 00:40:34.345826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345837 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.155) 0:00:01.383 ***** 2026-01-07 00:40:34.345848 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.345860 | orchestrator | 2026-01-07 00:40:34.345871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345887 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.184) 0:00:01.567 ***** 2026-01-07 00:40:34.345898 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.345908 | orchestrator | 2026-01-07 00:40:34.345917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345927 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.177) 0:00:01.745 ***** 2026-01-07 00:40:34.345937 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.345946 | orchestrator | 2026-01-07 00:40:34.345956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.345965 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.173) 0:00:01.919 ***** 2026-01-07 00:40:34.345975 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:40:34.345984 | orchestrator | 2026-01-07 00:40:34.345994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346003 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:00.175) 0:00:02.094 ***** 2026-01-07 00:40:34.346013 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346087 | orchestrator | 2026-01-07 00:40:34.346098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346108 | orchestrator | Wednesday 07 January 2026 00:40:30 +0000 (0:00:00.171) 0:00:02.266 ***** 2026-01-07 00:40:34.346117 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346127 | orchestrator | 2026-01-07 00:40:34.346137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346146 | orchestrator | Wednesday 07 January 2026 00:40:30 +0000 (0:00:00.178) 0:00:02.444 ***** 2026-01-07 00:40:34.346156 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38) 2026-01-07 00:40:34.346167 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38) 2026-01-07 00:40:34.346177 | orchestrator | 2026-01-07 00:40:34.346187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346216 | orchestrator | Wednesday 07 January 2026 00:40:30 +0000 (0:00:00.360) 0:00:02.804 ***** 2026-01-07 00:40:34.346226 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a) 2026-01-07 00:40:34.346236 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a) 2026-01-07 00:40:34.346246 | orchestrator | 2026-01-07 00:40:34.346255 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-07 00:40:34.346265 | orchestrator | Wednesday 07 January 2026 00:40:31 +0000 (0:00:00.483) 0:00:03.288 ***** 2026-01-07 00:40:34.346275 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9) 2026-01-07 00:40:34.346285 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9) 2026-01-07 00:40:34.346294 | orchestrator | 2026-01-07 00:40:34.346304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346322 | orchestrator | Wednesday 07 January 2026 00:40:31 +0000 (0:00:00.519) 0:00:03.808 ***** 2026-01-07 00:40:34.346331 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4) 2026-01-07 00:40:34.346341 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4) 2026-01-07 00:40:34.346432 | orchestrator | 2026-01-07 00:40:34.346447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:34.346456 | orchestrator | Wednesday 07 January 2026 00:40:32 +0000 (0:00:00.732) 0:00:04.540 ***** 2026-01-07 00:40:34.346466 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:40:34.346476 | orchestrator | 2026-01-07 00:40:34.346491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346502 | orchestrator | Wednesday 07 January 2026 00:40:32 +0000 (0:00:00.312) 0:00:04.852 ***** 2026-01-07 00:40:34.346511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-07 00:40:34.346521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-07 00:40:34.346530 | orchestrator | included: 
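The repeated "Add known links" tasks above map each kernel device (`sdb`, `sdc`, ...) to its `/dev/disk/by-id` aliases (`scsi-0QEMU_...`, `scsi-SQEMU_...`), so later config can reference stable names. A sketch of that grouping, parameterized on the by-id directory rather than assuming the testbed's device layout:

```python
import os
from collections import defaultdict

def device_links(by_id_dir: str = "/dev/disk/by-id") -> dict:
    """Group by-id symlinks by the kernel device they resolve to,
    e.g. {'sdb': ['scsi-0QEMU_...', 'scsi-SQEMU_...']} -- the same
    shape the play builds one included task file at a time."""
    links = defaultdict(list)
    for name in sorted(os.listdir(by_id_dir)):
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links[os.path.basename(target)].append(name)
    return dict(links)
```

This explains why each QEMU disk shows two `ok` items in the log: udev creates both a `scsi-0QEMU_...` and a `scsi-SQEMU_...` alias for the same backing device.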
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-07 00:40:34.346540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-07 00:40:34.346550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-07 00:40:34.346559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-07 00:40:34.346569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-07 00:40:34.346578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-07 00:40:34.346588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-07 00:40:34.346597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-07 00:40:34.346607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-07 00:40:34.346616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-07 00:40:34.346626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-07 00:40:34.346636 | orchestrator | 2026-01-07 00:40:34.346646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346655 | orchestrator | Wednesday 07 January 2026 00:40:32 +0000 (0:00:00.370) 0:00:05.222 ***** 2026-01-07 00:40:34.346665 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346675 | orchestrator | 2026-01-07 00:40:34.346685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346694 | orchestrator | Wednesday 07 January 2026 00:40:33 +0000 
(0:00:00.196) 0:00:05.419 ***** 2026-01-07 00:40:34.346704 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346713 | orchestrator | 2026-01-07 00:40:34.346723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346733 | orchestrator | Wednesday 07 January 2026 00:40:33 +0000 (0:00:00.187) 0:00:05.607 ***** 2026-01-07 00:40:34.346742 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346752 | orchestrator | 2026-01-07 00:40:34.346762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346771 | orchestrator | Wednesday 07 January 2026 00:40:33 +0000 (0:00:00.195) 0:00:05.802 ***** 2026-01-07 00:40:34.346781 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346791 | orchestrator | 2026-01-07 00:40:34.346801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346810 | orchestrator | Wednesday 07 January 2026 00:40:33 +0000 (0:00:00.183) 0:00:05.986 ***** 2026-01-07 00:40:34.346828 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346838 | orchestrator | 2026-01-07 00:40:34.346848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346857 | orchestrator | Wednesday 07 January 2026 00:40:33 +0000 (0:00:00.190) 0:00:06.176 ***** 2026-01-07 00:40:34.346867 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346877 | orchestrator | 2026-01-07 00:40:34.346886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:34.346896 | orchestrator | Wednesday 07 January 2026 00:40:34 +0000 (0:00:00.191) 0:00:06.367 ***** 2026-01-07 00:40:34.346913 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:34.346929 | orchestrator | 2026-01-07 00:40:34.346952 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-07 00:40:41.369677 | orchestrator | Wednesday 07 January 2026 00:40:34 +0000 (0:00:00.195) 0:00:06.563 ***** 2026-01-07 00:40:41.369820 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.369838 | orchestrator | 2026-01-07 00:40:41.369851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:41.369863 | orchestrator | Wednesday 07 January 2026 00:40:34 +0000 (0:00:00.185) 0:00:06.749 ***** 2026-01-07 00:40:41.369874 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-07 00:40:41.369887 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-07 00:40:41.369898 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-07 00:40:41.369909 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-07 00:40:41.369920 | orchestrator | 2026-01-07 00:40:41.369932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:41.369943 | orchestrator | Wednesday 07 January 2026 00:40:35 +0000 (0:00:00.915) 0:00:07.664 ***** 2026-01-07 00:40:41.369954 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.369965 | orchestrator | 2026-01-07 00:40:41.369976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:41.369988 | orchestrator | Wednesday 07 January 2026 00:40:35 +0000 (0:00:00.196) 0:00:07.860 ***** 2026-01-07 00:40:41.369999 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370010 | orchestrator | 2026-01-07 00:40:41.370082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:41.370094 | orchestrator | Wednesday 07 January 2026 00:40:35 +0000 (0:00:00.185) 0:00:08.046 ***** 2026-01-07 00:40:41.370105 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370116 | orchestrator | 2026-01-07 
00:40:41.370127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:41.370138 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.190) 0:00:08.236 ***** 2026-01-07 00:40:41.370150 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370161 | orchestrator | 2026-01-07 00:40:41.370173 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:40:41.370186 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.196) 0:00:08.433 ***** 2026-01-07 00:40:41.370199 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:40:41.370211 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:40:41.370224 | orchestrator | 2026-01-07 00:40:41.370259 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:40:41.370273 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.155) 0:00:08.588 ***** 2026-01-07 00:40:41.370287 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370299 | orchestrator | 2026-01-07 00:40:41.370312 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:40:41.370325 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.133) 0:00:08.721 ***** 2026-01-07 00:40:41.370337 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370373 | orchestrator | 2026-01-07 00:40:41.370386 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:40:41.370421 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.125) 0:00:08.847 ***** 2026-01-07 00:40:41.370434 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370446 | orchestrator | 2026-01-07 00:40:41.370471 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-01-07 00:40:41.370484 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.127) 0:00:08.975 ***** 2026-01-07 00:40:41.370509 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:41.370522 | orchestrator | 2026-01-07 00:40:41.370535 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:40:41.370546 | orchestrator | Wednesday 07 January 2026 00:40:36 +0000 (0:00:00.135) 0:00:09.110 ***** 2026-01-07 00:40:41.370558 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23474997-0e8b-5abe-afd2-a58c42930ca8'}}) 2026-01-07 00:40:41.370570 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18b58870-6028-5d13-8db0-fb505e00be4b'}}) 2026-01-07 00:40:41.370581 | orchestrator | 2026-01-07 00:40:41.370591 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:40:41.370603 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.144) 0:00:09.254 ***** 2026-01-07 00:40:41.370615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23474997-0e8b-5abe-afd2-a58c42930ca8'}})  2026-01-07 00:40:41.370635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18b58870-6028-5d13-8db0-fb505e00be4b'}})  2026-01-07 00:40:41.370646 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370657 | orchestrator | 2026-01-07 00:40:41.370668 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:40:41.370679 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.169) 0:00:09.424 ***** 2026-01-07 00:40:41.370690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23474997-0e8b-5abe-afd2-a58c42930ca8'}})  2026-01-07 00:40:41.370701 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18b58870-6028-5d13-8db0-fb505e00be4b'}})  2026-01-07 00:40:41.370712 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370723 | orchestrator | 2026-01-07 00:40:41.370733 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:40:41.370745 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.316) 0:00:09.740 ***** 2026-01-07 00:40:41.370755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23474997-0e8b-5abe-afd2-a58c42930ca8'}})  2026-01-07 00:40:41.370787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18b58870-6028-5d13-8db0-fb505e00be4b'}})  2026-01-07 00:40:41.370799 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370810 | orchestrator | 2026-01-07 00:40:41.370821 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:40:41.370840 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.146) 0:00:09.887 ***** 2026-01-07 00:40:41.370852 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:41.370863 | orchestrator | 2026-01-07 00:40:41.370874 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:40:41.370885 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.134) 0:00:10.021 ***** 2026-01-07 00:40:41.370896 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:40:41.370907 | orchestrator | 2026-01-07 00:40:41.370918 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:40:41.370929 | orchestrator | Wednesday 07 January 2026 00:40:37 +0000 (0:00:00.125) 0:00:10.147 ***** 2026-01-07 00:40:41.370939 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.370950 | orchestrator | 
2026-01-07 00:40:41.370961 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:40:41.370972 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.128) 0:00:10.275 ***** 2026-01-07 00:40:41.370992 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.371003 | orchestrator | 2026-01-07 00:40:41.371014 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:40:41.371025 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.131) 0:00:10.407 ***** 2026-01-07 00:40:41.371036 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.371046 | orchestrator | 2026-01-07 00:40:41.371057 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:40:41.371068 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.137) 0:00:10.544 ***** 2026-01-07 00:40:41.371079 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:40:41.371090 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:40:41.371101 | orchestrator |  "sdb": { 2026-01-07 00:40:41.371112 | orchestrator |  "osd_lvm_uuid": "23474997-0e8b-5abe-afd2-a58c42930ca8" 2026-01-07 00:40:41.371124 | orchestrator |  }, 2026-01-07 00:40:41.371135 | orchestrator |  "sdc": { 2026-01-07 00:40:41.371145 | orchestrator |  "osd_lvm_uuid": "18b58870-6028-5d13-8db0-fb505e00be4b" 2026-01-07 00:40:41.371157 | orchestrator |  } 2026-01-07 00:40:41.371168 | orchestrator |  } 2026-01-07 00:40:41.371179 | orchestrator | } 2026-01-07 00:40:41.371190 | orchestrator | 2026-01-07 00:40:41.371201 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:40:41.371212 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.127) 0:00:10.672 ***** 2026-01-07 00:40:41.371223 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.371234 | orchestrator | 
2026-01-07 00:40:41.371245 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:40:41.371256 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.116) 0:00:10.789 ***** 2026-01-07 00:40:41.371267 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.371278 | orchestrator | 2026-01-07 00:40:41.371289 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:40:41.371300 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.121) 0:00:10.910 ***** 2026-01-07 00:40:41.371311 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:40:41.371321 | orchestrator | 2026-01-07 00:40:41.371332 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:40:41.371385 | orchestrator | Wednesday 07 January 2026 00:40:38 +0000 (0:00:00.121) 0:00:11.032 ***** 2026-01-07 00:40:41.371398 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:40:41.371409 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:40:41.371420 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:40:41.371430 | orchestrator |  "sdb": { 2026-01-07 00:40:41.371441 | orchestrator |  "osd_lvm_uuid": "23474997-0e8b-5abe-afd2-a58c42930ca8" 2026-01-07 00:40:41.371452 | orchestrator |  }, 2026-01-07 00:40:41.371463 | orchestrator |  "sdc": { 2026-01-07 00:40:41.371474 | orchestrator |  "osd_lvm_uuid": "18b58870-6028-5d13-8db0-fb505e00be4b" 2026-01-07 00:40:41.371485 | orchestrator |  } 2026-01-07 00:40:41.371495 | orchestrator |  }, 2026-01-07 00:40:41.371506 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:40:41.371517 | orchestrator |  { 2026-01-07 00:40:41.371528 | orchestrator |  "data": "osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8", 2026-01-07 00:40:41.371539 | orchestrator |  "data_vg": "ceph-23474997-0e8b-5abe-afd2-a58c42930ca8" 2026-01-07 00:40:41.371550 | orchestrator |  }, 
2026-01-07 00:40:41.371560 | orchestrator |  { 2026-01-07 00:40:41.371571 | orchestrator |  "data": "osd-block-18b58870-6028-5d13-8db0-fb505e00be4b", 2026-01-07 00:40:41.371582 | orchestrator |  "data_vg": "ceph-18b58870-6028-5d13-8db0-fb505e00be4b" 2026-01-07 00:40:41.371599 | orchestrator |  } 2026-01-07 00:40:41.371610 | orchestrator |  ] 2026-01-07 00:40:41.371621 | orchestrator |  } 2026-01-07 00:40:41.371648 | orchestrator | } 2026-01-07 00:40:41.371659 | orchestrator | 2026-01-07 00:40:41.371670 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-07 00:40:41.371681 | orchestrator | Wednesday 07 January 2026 00:40:39 +0000 (0:00:00.349) 0:00:11.382 ***** 2026-01-07 00:40:41.371692 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-07 00:40:41.371703 | orchestrator | 2026-01-07 00:40:41.371714 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:40:41.371724 | orchestrator | 2026-01-07 00:40:41.371735 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:40:41.371746 | orchestrator | Wednesday 07 January 2026 00:40:40 +0000 (0:00:01.740) 0:00:13.122 ***** 2026-01-07 00:40:41.371757 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:40:41.371768 | orchestrator | 2026-01-07 00:40:41.371778 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:40:41.371790 | orchestrator | Wednesday 07 January 2026 00:40:41 +0000 (0:00:00.240) 0:00:13.363 ***** 2026-01-07 00:40:41.371800 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:41.371811 | orchestrator | 2026-01-07 00:40:41.371830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.700704 | orchestrator | Wednesday 07 January 2026 00:40:41 +0000 (0:00:00.227) 
0:00:13.590 ***** 2026-01-07 00:40:48.700842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:40:48.700859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:40:48.700871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:40:48.700882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:40:48.700893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:40:48.700904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:40:48.700915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:40:48.700926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:40:48.700937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:40:48.700948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:40:48.700959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:40:48.700975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:40:48.700987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:40:48.700999 | orchestrator | 2026-01-07 00:40:48.701011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701022 | orchestrator | Wednesday 07 January 2026 00:40:41 +0000 (0:00:00.369) 0:00:13.960 ***** 2026-01-07 00:40:48.701034 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:40:48.701046 | orchestrator | 2026-01-07 00:40:48.701058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701068 | orchestrator | Wednesday 07 January 2026 00:40:41 +0000 (0:00:00.180) 0:00:14.141 ***** 2026-01-07 00:40:48.701079 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701090 | orchestrator | 2026-01-07 00:40:48.701101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701112 | orchestrator | Wednesday 07 January 2026 00:40:42 +0000 (0:00:00.179) 0:00:14.321 ***** 2026-01-07 00:40:48.701123 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701134 | orchestrator | 2026-01-07 00:40:48.701145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701183 | orchestrator | Wednesday 07 January 2026 00:40:42 +0000 (0:00:00.169) 0:00:14.490 ***** 2026-01-07 00:40:48.701195 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701206 | orchestrator | 2026-01-07 00:40:48.701219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701231 | orchestrator | Wednesday 07 January 2026 00:40:42 +0000 (0:00:00.177) 0:00:14.668 ***** 2026-01-07 00:40:48.701243 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701255 | orchestrator | 2026-01-07 00:40:48.701268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701281 | orchestrator | Wednesday 07 January 2026 00:40:42 +0000 (0:00:00.520) 0:00:15.188 ***** 2026-01-07 00:40:48.701293 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701306 | orchestrator | 2026-01-07 00:40:48.701369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701384 | 
orchestrator | Wednesday 07 January 2026 00:40:43 +0000 (0:00:00.192) 0:00:15.381 ***** 2026-01-07 00:40:48.701397 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701416 | orchestrator | 2026-01-07 00:40:48.701435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701475 | orchestrator | Wednesday 07 January 2026 00:40:43 +0000 (0:00:00.197) 0:00:15.578 ***** 2026-01-07 00:40:48.701498 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.701516 | orchestrator | 2026-01-07 00:40:48.701534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701551 | orchestrator | Wednesday 07 January 2026 00:40:43 +0000 (0:00:00.190) 0:00:15.769 ***** 2026-01-07 00:40:48.701570 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c) 2026-01-07 00:40:48.701590 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c) 2026-01-07 00:40:48.701608 | orchestrator | 2026-01-07 00:40:48.701626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701645 | orchestrator | Wednesday 07 January 2026 00:40:43 +0000 (0:00:00.394) 0:00:16.163 ***** 2026-01-07 00:40:48.701659 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83) 2026-01-07 00:40:48.701670 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83) 2026-01-07 00:40:48.701681 | orchestrator | 2026-01-07 00:40:48.701692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701702 | orchestrator | Wednesday 07 January 2026 00:40:44 +0000 (0:00:00.394) 0:00:16.557 ***** 2026-01-07 00:40:48.701713 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d) 2026-01-07 00:40:48.701724 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d) 2026-01-07 00:40:48.701734 | orchestrator | 2026-01-07 00:40:48.701745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701777 | orchestrator | Wednesday 07 January 2026 00:40:44 +0000 (0:00:00.400) 0:00:16.958 ***** 2026-01-07 00:40:48.701788 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8) 2026-01-07 00:40:48.701799 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8) 2026-01-07 00:40:48.701810 | orchestrator | 2026-01-07 00:40:48.701822 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:48.701833 | orchestrator | Wednesday 07 January 2026 00:40:45 +0000 (0:00:00.411) 0:00:17.370 ***** 2026-01-07 00:40:48.701843 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:40:48.701854 | orchestrator | 2026-01-07 00:40:48.701865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.701876 | orchestrator | Wednesday 07 January 2026 00:40:45 +0000 (0:00:00.319) 0:00:17.689 ***** 2026-01-07 00:40:48.701900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:40:48.701911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:40:48.701922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:40:48.701933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:40:48.701944 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:40:48.701954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:40:48.701965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:40:48.701976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:40:48.701987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:40:48.701997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:40:48.702008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:40:48.702064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:40:48.702078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:40:48.702088 | orchestrator | 2026-01-07 00:40:48.702099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702110 | orchestrator | Wednesday 07 January 2026 00:40:45 +0000 (0:00:00.367) 0:00:18.056 ***** 2026-01-07 00:40:48.702121 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702132 | orchestrator | 2026-01-07 00:40:48.702143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702164 | orchestrator | Wednesday 07 January 2026 00:40:46 +0000 (0:00:00.531) 0:00:18.587 ***** 2026-01-07 00:40:48.702176 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702187 | orchestrator | 2026-01-07 00:40:48.702198 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-07 00:40:48.702209 | orchestrator | Wednesday 07 January 2026 00:40:46 +0000 (0:00:00.192) 0:00:18.780 ***** 2026-01-07 00:40:48.702220 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702231 | orchestrator | 2026-01-07 00:40:48.702242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702253 | orchestrator | Wednesday 07 January 2026 00:40:46 +0000 (0:00:00.192) 0:00:18.972 ***** 2026-01-07 00:40:48.702264 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702275 | orchestrator | 2026-01-07 00:40:48.702286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702297 | orchestrator | Wednesday 07 January 2026 00:40:46 +0000 (0:00:00.188) 0:00:19.161 ***** 2026-01-07 00:40:48.702308 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702318 | orchestrator | 2026-01-07 00:40:48.702360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702373 | orchestrator | Wednesday 07 January 2026 00:40:47 +0000 (0:00:00.195) 0:00:19.356 ***** 2026-01-07 00:40:48.702384 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702395 | orchestrator | 2026-01-07 00:40:48.702406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702417 | orchestrator | Wednesday 07 January 2026 00:40:47 +0000 (0:00:00.192) 0:00:19.549 ***** 2026-01-07 00:40:48.702428 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:48.702438 | orchestrator | 2026-01-07 00:40:48.702449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702460 | orchestrator | Wednesday 07 January 2026 00:40:47 +0000 (0:00:00.202) 0:00:19.752 ***** 2026-01-07 00:40:48.702479 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:40:48.702490 | orchestrator | 2026-01-07 00:40:48.702500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702511 | orchestrator | Wednesday 07 January 2026 00:40:47 +0000 (0:00:00.201) 0:00:19.953 ***** 2026-01-07 00:40:48.702522 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:40:48.702534 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:40:48.702545 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:40:48.702556 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:40:48.702567 | orchestrator | 2026-01-07 00:40:48.702578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:48.702589 | orchestrator | Wednesday 07 January 2026 00:40:48 +0000 (0:00:00.770) 0:00:20.723 ***** 2026-01-07 00:40:48.702599 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.829977 | orchestrator | 2026-01-07 00:40:54.830162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:54.830181 | orchestrator | Wednesday 07 January 2026 00:40:48 +0000 (0:00:00.197) 0:00:20.921 ***** 2026-01-07 00:40:54.830193 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830205 | orchestrator | 2026-01-07 00:40:54.830216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:54.830228 | orchestrator | Wednesday 07 January 2026 00:40:48 +0000 (0:00:00.187) 0:00:21.108 ***** 2026-01-07 00:40:54.830239 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830250 | orchestrator | 2026-01-07 00:40:54.830261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:40:54.830272 | orchestrator | Wednesday 07 January 2026 00:40:49 +0000 (0:00:00.195) 0:00:21.304 ***** 2026-01-07 00:40:54.830283 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830294 | orchestrator | 2026-01-07 00:40:54.830305 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:40:54.830316 | orchestrator | Wednesday 07 January 2026 00:40:49 +0000 (0:00:00.614) 0:00:21.919 ***** 2026-01-07 00:40:54.830384 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:40:54.830397 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:40:54.830408 | orchestrator | 2026-01-07 00:40:54.830419 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:40:54.830430 | orchestrator | Wednesday 07 January 2026 00:40:49 +0000 (0:00:00.171) 0:00:22.090 ***** 2026-01-07 00:40:54.830441 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830453 | orchestrator | 2026-01-07 00:40:54.830464 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:40:54.830477 | orchestrator | Wednesday 07 January 2026 00:40:49 +0000 (0:00:00.132) 0:00:22.223 ***** 2026-01-07 00:40:54.830490 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830503 | orchestrator | 2026-01-07 00:40:54.830516 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:40:54.830528 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.127) 0:00:22.350 ***** 2026-01-07 00:40:54.830541 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830554 | orchestrator | 2026-01-07 00:40:54.830566 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-07 00:40:54.830578 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.133) 0:00:22.484 ***** 2026-01-07 00:40:54.830591 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:54.830605 | 
orchestrator | 2026-01-07 00:40:54.830617 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:40:54.830630 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.137) 0:00:22.621 ***** 2026-01-07 00:40:54.830645 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b296d094-78ce-5ce3-9fe3-598726116dc8'}}) 2026-01-07 00:40:54.830658 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73010335-3e9e-51ea-81b3-4dcf5932c07d'}}) 2026-01-07 00:40:54.830697 | orchestrator | 2026-01-07 00:40:54.830710 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:40:54.830723 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.146) 0:00:22.767 ***** 2026-01-07 00:40:54.830736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b296d094-78ce-5ce3-9fe3-598726116dc8'}})  2026-01-07 00:40:54.830769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73010335-3e9e-51ea-81b3-4dcf5932c07d'}})  2026-01-07 00:40:54.830782 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830795 | orchestrator | 2026-01-07 00:40:54.830809 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:40:54.830822 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.150) 0:00:22.918 ***** 2026-01-07 00:40:54.830833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b296d094-78ce-5ce3-9fe3-598726116dc8'}})  2026-01-07 00:40:54.830844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73010335-3e9e-51ea-81b3-4dcf5932c07d'}})  2026-01-07 00:40:54.830855 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830865 | orchestrator | 2026-01-07 
00:40:54.830876 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:40:54.830887 | orchestrator | Wednesday 07 January 2026 00:40:50 +0000 (0:00:00.167) 0:00:23.085 ***** 2026-01-07 00:40:54.830898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b296d094-78ce-5ce3-9fe3-598726116dc8'}})  2026-01-07 00:40:54.830909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73010335-3e9e-51ea-81b3-4dcf5932c07d'}})  2026-01-07 00:40:54.830920 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.830930 | orchestrator | 2026-01-07 00:40:54.830941 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:40:54.830952 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.180) 0:00:23.266 ***** 2026-01-07 00:40:54.830963 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:54.830974 | orchestrator | 2026-01-07 00:40:54.830984 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:40:54.830995 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.140) 0:00:23.406 ***** 2026-01-07 00:40:54.831006 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:40:54.831017 | orchestrator | 2026-01-07 00:40:54.831027 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:40:54.831038 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.136) 0:00:23.543 ***** 2026-01-07 00:40:54.831069 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831082 | orchestrator | 2026-01-07 00:40:54.831093 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:40:54.831103 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.307) 0:00:23.850 ***** 2026-01-07 
00:40:54.831114 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831125 | orchestrator | 2026-01-07 00:40:54.831136 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:40:54.831147 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.156) 0:00:24.007 ***** 2026-01-07 00:40:54.831158 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831168 | orchestrator | 2026-01-07 00:40:54.831179 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:40:54.831190 | orchestrator | Wednesday 07 January 2026 00:40:51 +0000 (0:00:00.111) 0:00:24.118 ***** 2026-01-07 00:40:54.831201 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:40:54.831212 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:40:54.831223 | orchestrator |  "sdb": { 2026-01-07 00:40:54.831234 | orchestrator |  "osd_lvm_uuid": "b296d094-78ce-5ce3-9fe3-598726116dc8" 2026-01-07 00:40:54.831254 | orchestrator |  }, 2026-01-07 00:40:54.831266 | orchestrator |  "sdc": { 2026-01-07 00:40:54.831276 | orchestrator |  "osd_lvm_uuid": "73010335-3e9e-51ea-81b3-4dcf5932c07d" 2026-01-07 00:40:54.831287 | orchestrator |  } 2026-01-07 00:40:54.831298 | orchestrator |  } 2026-01-07 00:40:54.831309 | orchestrator | } 2026-01-07 00:40:54.831320 | orchestrator | 2026-01-07 00:40:54.831350 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:40:54.831361 | orchestrator | Wednesday 07 January 2026 00:40:52 +0000 (0:00:00.135) 0:00:24.253 ***** 2026-01-07 00:40:54.831372 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831383 | orchestrator | 2026-01-07 00:40:54.831394 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:40:54.831405 | orchestrator | Wednesday 07 January 2026 00:40:52 +0000 (0:00:00.125) 0:00:24.379 ***** 2026-01-07 
00:40:54.831416 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831427 | orchestrator | 2026-01-07 00:40:54.831437 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:40:54.831448 | orchestrator | Wednesday 07 January 2026 00:40:52 +0000 (0:00:00.135) 0:00:24.514 ***** 2026-01-07 00:40:54.831459 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:40:54.831470 | orchestrator | 2026-01-07 00:40:54.831481 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:40:54.831492 | orchestrator | Wednesday 07 January 2026 00:40:52 +0000 (0:00:00.130) 0:00:24.644 ***** 2026-01-07 00:40:54.831503 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 00:40:54.831514 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:40:54.831525 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:40:54.831536 | orchestrator |  "sdb": { 2026-01-07 00:40:54.831547 | orchestrator |  "osd_lvm_uuid": "b296d094-78ce-5ce3-9fe3-598726116dc8" 2026-01-07 00:40:54.831558 | orchestrator |  }, 2026-01-07 00:40:54.831569 | orchestrator |  "sdc": { 2026-01-07 00:40:54.831580 | orchestrator |  "osd_lvm_uuid": "73010335-3e9e-51ea-81b3-4dcf5932c07d" 2026-01-07 00:40:54.831591 | orchestrator |  } 2026-01-07 00:40:54.831602 | orchestrator |  }, 2026-01-07 00:40:54.831613 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:40:54.831624 | orchestrator |  { 2026-01-07 00:40:54.831634 | orchestrator |  "data": "osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8", 2026-01-07 00:40:54.831645 | orchestrator |  "data_vg": "ceph-b296d094-78ce-5ce3-9fe3-598726116dc8" 2026-01-07 00:40:54.831656 | orchestrator |  }, 2026-01-07 00:40:54.831667 | orchestrator |  { 2026-01-07 00:40:54.831678 | orchestrator |  "data": "osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d", 2026-01-07 00:40:54.831689 | orchestrator |  "data_vg": "ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d" 2026-01-07 
00:40:54.831700 | orchestrator |  } 2026-01-07 00:40:54.831711 | orchestrator |  ] 2026-01-07 00:40:54.831722 | orchestrator |  } 2026-01-07 00:40:54.831732 | orchestrator | } 2026-01-07 00:40:54.831743 | orchestrator | 2026-01-07 00:40:54.831754 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-07 00:40:54.831765 | orchestrator | Wednesday 07 January 2026 00:40:52 +0000 (0:00:00.209) 0:00:24.854 ***** 2026-01-07 00:40:54.831776 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:40:54.831787 | orchestrator | 2026-01-07 00:40:54.831798 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:40:54.831808 | orchestrator | 2026-01-07 00:40:54.831819 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:40:54.831830 | orchestrator | Wednesday 07 January 2026 00:40:53 +0000 (0:00:00.990) 0:00:25.844 ***** 2026-01-07 00:40:54.831841 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-07 00:40:54.831852 | orchestrator | 2026-01-07 00:40:54.831863 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:40:54.831886 | orchestrator | Wednesday 07 January 2026 00:40:54 +0000 (0:00:00.616) 0:00:26.461 ***** 2026-01-07 00:40:54.831898 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:40:54.831909 | orchestrator | 2026-01-07 00:40:54.831920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:40:54.831930 | orchestrator | Wednesday 07 January 2026 00:40:54 +0000 (0:00:00.222) 0:00:26.683 ***** 2026-01-07 00:40:54.831941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-07 00:40:54.831952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => 
(item=loop1) 2026-01-07 00:40:54.831963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-07 00:40:54.831974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-07 00:40:54.831985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-07 00:40:54.832002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-07 00:41:01.063153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-07 00:41:01.063301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-07 00:41:01.063372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-07 00:41:01.063386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-07 00:41:01.063397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-07 00:41:01.063408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-07 00:41:01.063419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-07 00:41:01.063431 | orchestrator | 2026-01-07 00:41:01.063443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063456 | orchestrator | Wednesday 07 January 2026 00:40:54 +0000 (0:00:00.364) 0:00:27.047 ***** 2026-01-07 00:41:01.063467 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063479 | orchestrator | 2026-01-07 00:41:01.063490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063502 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 
(0:00:00.188) 0:00:27.236 ***** 2026-01-07 00:41:01.063513 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063523 | orchestrator | 2026-01-07 00:41:01.063534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063545 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.196) 0:00:27.433 ***** 2026-01-07 00:41:01.063556 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063567 | orchestrator | 2026-01-07 00:41:01.063578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063589 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.180) 0:00:27.613 ***** 2026-01-07 00:41:01.063600 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063611 | orchestrator | 2026-01-07 00:41:01.063625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063638 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.148) 0:00:27.762 ***** 2026-01-07 00:41:01.063650 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063668 | orchestrator | 2026-01-07 00:41:01.063687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063707 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.138) 0:00:27.901 ***** 2026-01-07 00:41:01.063726 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063746 | orchestrator | 2026-01-07 00:41:01.063767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063824 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.139) 0:00:28.040 ***** 2026-01-07 00:41:01.063846 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063865 | orchestrator | 2026-01-07 00:41:01.063883 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-01-07 00:41:01.063903 | orchestrator | Wednesday 07 January 2026 00:40:55 +0000 (0:00:00.168) 0:00:28.208 ***** 2026-01-07 00:41:01.063915 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.063926 | orchestrator | 2026-01-07 00:41:01.063938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.063949 | orchestrator | Wednesday 07 January 2026 00:40:56 +0000 (0:00:00.158) 0:00:28.367 ***** 2026-01-07 00:41:01.063963 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e) 2026-01-07 00:41:01.063984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e) 2026-01-07 00:41:01.064002 | orchestrator | 2026-01-07 00:41:01.064020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.064040 | orchestrator | Wednesday 07 January 2026 00:40:56 +0000 (0:00:00.536) 0:00:28.903 ***** 2026-01-07 00:41:01.064060 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb) 2026-01-07 00:41:01.064080 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb) 2026-01-07 00:41:01.064098 | orchestrator | 2026-01-07 00:41:01.064117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.064128 | orchestrator | Wednesday 07 January 2026 00:40:56 +0000 (0:00:00.313) 0:00:29.217 ***** 2026-01-07 00:41:01.064139 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6) 2026-01-07 00:41:01.064150 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6) 2026-01-07 00:41:01.064161 | orchestrator | 2026-01-07 00:41:01.064172 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.064182 | orchestrator | Wednesday 07 January 2026 00:40:57 +0000 (0:00:00.354) 0:00:29.571 ***** 2026-01-07 00:41:01.064193 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e) 2026-01-07 00:41:01.064204 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e) 2026-01-07 00:41:01.064214 | orchestrator | 2026-01-07 00:41:01.064225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:41:01.064236 | orchestrator | Wednesday 07 January 2026 00:40:57 +0000 (0:00:00.383) 0:00:29.955 ***** 2026-01-07 00:41:01.064247 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:41:01.064258 | orchestrator | 2026-01-07 00:41:01.064268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064304 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.293) 0:00:30.248 ***** 2026-01-07 00:41:01.064346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-07 00:41:01.064359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-07 00:41:01.064370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-07 00:41:01.064381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-07 00:41:01.064399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-07 00:41:01.064446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-07 00:41:01.064468 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-07 00:41:01.064486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-07 00:41:01.064519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-07 00:41:01.064538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-07 00:41:01.064557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-07 00:41:01.064575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-07 00:41:01.064594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-07 00:41:01.064613 | orchestrator | 2026-01-07 00:41:01.064633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064651 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.342) 0:00:30.591 ***** 2026-01-07 00:41:01.064668 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064679 | orchestrator | 2026-01-07 00:41:01.064690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064701 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.176) 0:00:30.767 ***** 2026-01-07 00:41:01.064712 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064723 | orchestrator | 2026-01-07 00:41:01.064734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064752 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.143) 0:00:30.911 ***** 2026-01-07 00:41:01.064763 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064774 | orchestrator | 2026-01-07 00:41:01.064786 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064797 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.140) 0:00:31.052 ***** 2026-01-07 00:41:01.064808 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064819 | orchestrator | 2026-01-07 00:41:01.064830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064841 | orchestrator | Wednesday 07 January 2026 00:40:58 +0000 (0:00:00.140) 0:00:31.193 ***** 2026-01-07 00:41:01.064852 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064863 | orchestrator | 2026-01-07 00:41:01.064874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064885 | orchestrator | Wednesday 07 January 2026 00:40:59 +0000 (0:00:00.190) 0:00:31.384 ***** 2026-01-07 00:41:01.064896 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064907 | orchestrator | 2026-01-07 00:41:01.064920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.064939 | orchestrator | Wednesday 07 January 2026 00:40:59 +0000 (0:00:00.406) 0:00:31.790 ***** 2026-01-07 00:41:01.064958 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.064975 | orchestrator | 2026-01-07 00:41:01.064994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065012 | orchestrator | Wednesday 07 January 2026 00:40:59 +0000 (0:00:00.150) 0:00:31.940 ***** 2026-01-07 00:41:01.065031 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.065050 | orchestrator | 2026-01-07 00:41:01.065069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065087 | orchestrator | Wednesday 07 January 2026 00:40:59 +0000 (0:00:00.178) 0:00:32.119 ***** 
2026-01-07 00:41:01.065106 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-07 00:41:01.065126 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-07 00:41:01.065147 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-07 00:41:01.065165 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-07 00:41:01.065184 | orchestrator | 2026-01-07 00:41:01.065196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065207 | orchestrator | Wednesday 07 January 2026 00:41:00 +0000 (0:00:00.535) 0:00:32.655 ***** 2026-01-07 00:41:01.065218 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.065239 | orchestrator | 2026-01-07 00:41:01.065249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065261 | orchestrator | Wednesday 07 January 2026 00:41:00 +0000 (0:00:00.154) 0:00:32.809 ***** 2026-01-07 00:41:01.065271 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.065282 | orchestrator | 2026-01-07 00:41:01.065293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065304 | orchestrator | Wednesday 07 January 2026 00:41:00 +0000 (0:00:00.159) 0:00:32.969 ***** 2026-01-07 00:41:01.065345 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.065358 | orchestrator | 2026-01-07 00:41:01.065369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:41:01.065380 | orchestrator | Wednesday 07 January 2026 00:41:00 +0000 (0:00:00.167) 0:00:33.136 ***** 2026-01-07 00:41:01.065391 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:01.065402 | orchestrator | 2026-01-07 00:41:01.065428 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:41:04.282012 | orchestrator | Wednesday 07 January 2026 00:41:01 
+0000 (0:00:00.143) 0:00:33.280 ***** 2026-01-07 00:41:04.282257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:41:04.282274 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:41:04.282286 | orchestrator | 2026-01-07 00:41:04.282299 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:41:04.282390 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.139) 0:00:33.420 ***** 2026-01-07 00:41:04.282404 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282416 | orchestrator | 2026-01-07 00:41:04.282427 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:41:04.282438 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.093) 0:00:33.514 ***** 2026-01-07 00:41:04.282449 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282461 | orchestrator | 2026-01-07 00:41:04.282472 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:41:04.282483 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.099) 0:00:33.614 ***** 2026-01-07 00:41:04.282494 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282507 | orchestrator | 2026-01-07 00:41:04.282521 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-07 00:41:04.282534 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.223) 0:00:33.837 ***** 2026-01-07 00:41:04.282546 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:41:04.282560 | orchestrator | 2026-01-07 00:41:04.282574 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:41:04.282586 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.101) 0:00:33.938 ***** 2026-01-07 00:41:04.282600 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '96f57bfe-16b3-5bb1-823a-e63af6581955'}}) 2026-01-07 00:41:04.282613 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e44d1cae-1e57-574a-aa47-ecf7991dd637'}}) 2026-01-07 00:41:04.282626 | orchestrator | 2026-01-07 00:41:04.282639 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:41:04.282652 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.145) 0:00:34.084 ***** 2026-01-07 00:41:04.282666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '96f57bfe-16b3-5bb1-823a-e63af6581955'}})  2026-01-07 00:41:04.282681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e44d1cae-1e57-574a-aa47-ecf7991dd637'}})  2026-01-07 00:41:04.282694 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282707 | orchestrator | 2026-01-07 00:41:04.282720 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:41:04.282732 | orchestrator | Wednesday 07 January 2026 00:41:01 +0000 (0:00:00.129) 0:00:34.214 ***** 2026-01-07 00:41:04.282775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '96f57bfe-16b3-5bb1-823a-e63af6581955'}})  2026-01-07 00:41:04.282787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e44d1cae-1e57-574a-aa47-ecf7991dd637'}})  2026-01-07 00:41:04.282798 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282809 | orchestrator | 2026-01-07 00:41:04.282820 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:41:04.282831 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.131) 0:00:34.345 ***** 2026-01-07 00:41:04.282863 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdb', 'value': {'osd_lvm_uuid': '96f57bfe-16b3-5bb1-823a-e63af6581955'}})  2026-01-07 00:41:04.282875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e44d1cae-1e57-574a-aa47-ecf7991dd637'}})  2026-01-07 00:41:04.282886 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.282896 | orchestrator | 2026-01-07 00:41:04.282907 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:41:04.282918 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.139) 0:00:34.485 ***** 2026-01-07 00:41:04.282929 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:41:04.282940 | orchestrator | 2026-01-07 00:41:04.282951 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:41:04.282962 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.111) 0:00:34.597 ***** 2026-01-07 00:41:04.282972 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:41:04.282983 | orchestrator | 2026-01-07 00:41:04.282994 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:41:04.283005 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.107) 0:00:34.704 ***** 2026-01-07 00:41:04.283016 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283026 | orchestrator | 2026-01-07 00:41:04.283037 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:41:04.283048 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.096) 0:00:34.800 ***** 2026-01-07 00:41:04.283058 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283069 | orchestrator | 2026-01-07 00:41:04.283080 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:41:04.283091 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 
(0:00:00.109) 0:00:34.910 ***** 2026-01-07 00:41:04.283102 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283113 | orchestrator | 2026-01-07 00:41:04.283123 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:41:04.283134 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.107) 0:00:35.018 ***** 2026-01-07 00:41:04.283145 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:41:04.283156 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:41:04.283166 | orchestrator |  "sdb": { 2026-01-07 00:41:04.283197 | orchestrator |  "osd_lvm_uuid": "96f57bfe-16b3-5bb1-823a-e63af6581955" 2026-01-07 00:41:04.283209 | orchestrator |  }, 2026-01-07 00:41:04.283221 | orchestrator |  "sdc": { 2026-01-07 00:41:04.283232 | orchestrator |  "osd_lvm_uuid": "e44d1cae-1e57-574a-aa47-ecf7991dd637" 2026-01-07 00:41:04.283244 | orchestrator |  } 2026-01-07 00:41:04.283254 | orchestrator |  } 2026-01-07 00:41:04.283265 | orchestrator | } 2026-01-07 00:41:04.283276 | orchestrator | 2026-01-07 00:41:04.283287 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:41:04.283298 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.098) 0:00:35.117 ***** 2026-01-07 00:41:04.283309 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283342 | orchestrator | 2026-01-07 00:41:04.283353 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:41:04.283364 | orchestrator | Wednesday 07 January 2026 00:41:02 +0000 (0:00:00.087) 0:00:35.204 ***** 2026-01-07 00:41:04.283386 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283397 | orchestrator | 2026-01-07 00:41:04.283408 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:41:04.283419 | orchestrator | Wednesday 07 January 2026 00:41:03 +0000 
(0:00:00.213) 0:00:35.418 ***** 2026-01-07 00:41:04.283430 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:41:04.283440 | orchestrator | 2026-01-07 00:41:04.283451 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:41:04.283462 | orchestrator | Wednesday 07 January 2026 00:41:03 +0000 (0:00:00.102) 0:00:35.520 ***** 2026-01-07 00:41:04.283473 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 00:41:04.283484 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:41:04.283495 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:41:04.283506 | orchestrator |  "sdb": { 2026-01-07 00:41:04.283516 | orchestrator |  "osd_lvm_uuid": "96f57bfe-16b3-5bb1-823a-e63af6581955" 2026-01-07 00:41:04.283527 | orchestrator |  }, 2026-01-07 00:41:04.283538 | orchestrator |  "sdc": { 2026-01-07 00:41:04.283549 | orchestrator |  "osd_lvm_uuid": "e44d1cae-1e57-574a-aa47-ecf7991dd637" 2026-01-07 00:41:04.283560 | orchestrator |  } 2026-01-07 00:41:04.283571 | orchestrator |  }, 2026-01-07 00:41:04.283582 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:41:04.283592 | orchestrator |  { 2026-01-07 00:41:04.283603 | orchestrator |  "data": "osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955", 2026-01-07 00:41:04.283614 | orchestrator |  "data_vg": "ceph-96f57bfe-16b3-5bb1-823a-e63af6581955" 2026-01-07 00:41:04.283625 | orchestrator |  }, 2026-01-07 00:41:04.283636 | orchestrator |  { 2026-01-07 00:41:04.283647 | orchestrator |  "data": "osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637", 2026-01-07 00:41:04.283658 | orchestrator |  "data_vg": "ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637" 2026-01-07 00:41:04.283669 | orchestrator |  } 2026-01-07 00:41:04.283685 | orchestrator |  ] 2026-01-07 00:41:04.283696 | orchestrator |  } 2026-01-07 00:41:04.283707 | orchestrator | } 2026-01-07 00:41:04.283718 | orchestrator | 2026-01-07 00:41:04.283728 | orchestrator | RUNNING HANDLER [Write configuration file] 
************************************* 2026-01-07 00:41:04.283739 | orchestrator | Wednesday 07 January 2026 00:41:03 +0000 (0:00:00.184) 0:00:35.705 ***** 2026-01-07 00:41:04.283750 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-07 00:41:04.283761 | orchestrator | 2026-01-07 00:41:04.283771 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:41:04.283783 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:41:04.283796 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:41:04.283807 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-07 00:41:04.283818 | orchestrator | 2026-01-07 00:41:04.283828 | orchestrator | 2026-01-07 00:41:04.283839 | orchestrator | 2026-01-07 00:41:04.283850 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:41:04.283860 | orchestrator | Wednesday 07 January 2026 00:41:04 +0000 (0:00:00.778) 0:00:36.483 ***** 2026-01-07 00:41:04.283871 | orchestrator | =============================================================================== 2026-01-07 00:41:04.283882 | orchestrator | Write configuration file ------------------------------------------------ 3.51s 2026-01-07 00:41:04.283893 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2026-01-07 00:41:04.283904 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2026-01-07 00:41:04.283915 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.08s 2026-01-07 00:41:04.283935 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-01-07 00:41:04.283954 | orchestrator | Add 
known partitions to the list of available block devices ------------- 0.77s 2026-01-07 00:41:04.283973 | orchestrator | Print configuration data ------------------------------------------------ 0.74s 2026-01-07 00:41:04.284034 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-01-07 00:41:04.284054 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-01-07 00:41:04.284071 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s 2026-01-07 00:41:04.284088 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-01-07 00:41:04.284106 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-01-07 00:41:04.284122 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s 2026-01-07 00:41:04.284149 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s 2026-01-07 00:41:04.467235 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s 2026-01-07 00:41:04.467432 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-01-07 00:41:04.467450 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-01-07 00:41:04.467462 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.48s 2026-01-07 00:41:04.467473 | orchestrator | Add known links to the list of available block devices ------------------ 0.48s 2026-01-07 00:41:04.467485 | orchestrator | Print DB devices -------------------------------------------------------- 0.47s 2026-01-07 00:41:26.509073 | orchestrator | 2026-01-07 00:41:26 | INFO  | Task 324a8f16-29ce-4627-9854-59812208549c (sync inventory) is running in background. Output coming soon. 
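The "Print configuration data" dumps above (for testbed-node-4 and testbed-node-5) show a fixed pattern: each entry in `ceph_osd_devices` yields one `lvm_volumes` item whose data LV is `osd-block-<osd_lvm_uuid>` and whose data VG is `ceph-<osd_lvm_uuid>`. A minimal Python sketch of that mapping, using the values from the log (the function itself is illustrative, not the playbook's actual code):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive the lvm_volumes list from a ceph_osd_devices dict,
    mirroring the structure printed by the 'Print configuration data' task."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-5 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "96f57bfe-16b3-5bb1-823a-e63af6581955"},
    "sdc": {"osd_lvm_uuid": "e44d1cae-1e57-574a-aa47-ecf7991dd637"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

This reproduces the `lvm_volumes` list written to the configuration file by the "Write configuration file" handler.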
2026-01-07 00:41:51.157524 | orchestrator | 2026-01-07 00:41:27 | INFO  | Starting group_vars file reorganization 2026-01-07 00:41:51.157649 | orchestrator | 2026-01-07 00:41:27 | INFO  | Moved 0 file(s) to their respective directories 2026-01-07 00:41:51.157663 | orchestrator | 2026-01-07 00:41:27 | INFO  | Group_vars file reorganization completed 2026-01-07 00:41:51.157673 | orchestrator | 2026-01-07 00:41:30 | INFO  | Starting variable preparation from inventory 2026-01-07 00:41:51.157683 | orchestrator | 2026-01-07 00:41:33 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-07 00:41:51.157692 | orchestrator | 2026-01-07 00:41:33 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-07 00:41:51.157726 | orchestrator | 2026-01-07 00:41:33 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-07 00:41:51.157736 | orchestrator | 2026-01-07 00:41:33 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-07 00:41:51.157746 | orchestrator | 2026-01-07 00:41:33 | INFO  | Variable preparation completed 2026-01-07 00:41:51.157755 | orchestrator | 2026-01-07 00:41:34 | INFO  | Starting inventory overwrite handling 2026-01-07 00:41:51.157769 | orchestrator | 2026-01-07 00:41:34 | INFO  | Handling group overwrites in 99-overwrite 2026-01-07 00:41:51.157778 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removing group frr:children from 60-generic 2026-01-07 00:41:51.157787 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-07 00:41:51.157796 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-07 00:41:51.157805 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-07 00:41:51.157814 | orchestrator | 2026-01-07 00:41:34 | INFO  | Handling group overwrites in 20-roles 2026-01-07 00:41:51.157849 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-01-07 00:41:51.157859 | orchestrator | 2026-01-07 00:41:34 | INFO  | Removed 5 group(s) in total 2026-01-07 00:41:51.157868 | orchestrator | 2026-01-07 00:41:34 | INFO  | Inventory overwrite handling completed 2026-01-07 00:41:51.157876 | orchestrator | 2026-01-07 00:41:35 | INFO  | Starting merge of inventory files 2026-01-07 00:41:51.157885 | orchestrator | 2026-01-07 00:41:35 | INFO  | Inventory files merged successfully 2026-01-07 00:41:51.157893 | orchestrator | 2026-01-07 00:41:39 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-07 00:41:51.157902 | orchestrator | 2026-01-07 00:41:49 | INFO  | Successfully wrote ClusterShell configuration 2026-01-07 00:41:51.157911 | orchestrator | [master db8c871] 2026-01-07-00-41 2026-01-07 00:41:51.157922 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-07 00:41:53.341100 | orchestrator | 2026-01-07 00:41:53 | INFO  | Task aa51cc55-5c7a-434a-b87a-efcc7aa6c602 (ceph-create-lvm-devices) was prepared for execution. 2026-01-07 00:41:53.341208 | orchestrator | 2026-01-07 00:41:53 | INFO  | It takes a moment until task aa51cc55-5c7a-434a-b87a-efcc7aa6c602 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-07 00:42:02.891572 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-07 00:42:02.891712 | orchestrator | 2.16.14
2026-01-07 00:42:02.891730 | orchestrator |
2026-01-07 00:42:02.891741 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-07 00:42:02.891753 | orchestrator |
2026-01-07 00:42:02.891763 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:42:02.891773 | orchestrator | Wednesday 07 January 2026 00:41:56 +0000 (0:00:00.229) 0:00:00.229 *****
2026-01-07 00:42:02.891784 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-07 00:42:02.891795 | orchestrator |
2026-01-07 00:42:02.891805 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:42:02.891815 | orchestrator | Wednesday 07 January 2026 00:41:57 +0000 (0:00:00.172) 0:00:00.401 *****
2026-01-07 00:42:02.891825 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:02.891835 | orchestrator |
2026-01-07 00:42:02.891846 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.891856 | orchestrator | Wednesday 07 January 2026 00:41:57 +0000 (0:00:00.171) 0:00:00.572 *****
2026-01-07 00:42:02.891866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:42:02.891875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:42:02.891885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:42:02.891895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:42:02.891905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:42:02.891914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:42:02.891924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:42:02.891933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:42:02.891943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-07 00:42:02.891953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:42:02.891962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:42:02.891972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:42:02.892004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:42:02.892014 | orchestrator |
2026-01-07 00:42:02.892024 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892033 | orchestrator | Wednesday 07 January 2026 00:41:57 +0000 (0:00:00.350) 0:00:00.923 *****
2026-01-07 00:42:02.892043 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892053 | orchestrator |
2026-01-07 00:42:02.892063 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892072 | orchestrator | Wednesday 07 January 2026 00:41:57 +0000 (0:00:00.131) 0:00:01.055 *****
2026-01-07 00:42:02.892082 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892092 | orchestrator |
2026-01-07 00:42:02.892101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892111 | orchestrator | Wednesday 07 January 2026 00:41:57 +0000 (0:00:00.135) 0:00:01.190 *****
2026-01-07 00:42:02.892121 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892131 | orchestrator |
2026-01-07 00:42:02.892140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892150 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.145) 0:00:01.336 *****
2026-01-07 00:42:02.892160 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892169 | orchestrator |
2026-01-07 00:42:02.892179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892189 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.148) 0:00:01.484 *****
2026-01-07 00:42:02.892199 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892208 | orchestrator |
2026-01-07 00:42:02.892218 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892228 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.166) 0:00:01.650 *****
2026-01-07 00:42:02.892263 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892281 | orchestrator |
2026-01-07 00:42:02.892298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892314 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.157) 0:00:01.808 *****
2026-01-07 00:42:02.892331 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892341 | orchestrator |
2026-01-07 00:42:02.892351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892361 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.171) 0:00:01.979 *****
2026-01-07 00:42:02.892371 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892380 | orchestrator |
2026-01-07 00:42:02.892390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892400 | orchestrator | Wednesday 07 January 2026 00:41:58 +0000 (0:00:00.157) 0:00:02.137 *****
2026-01-07 00:42:02.892410 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38)
2026-01-07 00:42:02.892421 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38)
2026-01-07 00:42:02.892431 | orchestrator |
2026-01-07 00:42:02.892441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892467 | orchestrator | Wednesday 07 January 2026 00:41:59 +0000 (0:00:00.327) 0:00:02.464 *****
2026-01-07 00:42:02.892477 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a)
2026-01-07 00:42:02.892487 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a)
2026-01-07 00:42:02.892497 | orchestrator |
2026-01-07 00:42:02.892506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892516 | orchestrator | Wednesday 07 January 2026 00:41:59 +0000 (0:00:00.498) 0:00:02.962 *****
2026-01-07 00:42:02.892526 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9)
2026-01-07 00:42:02.892544 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9)
2026-01-07 00:42:02.892554 | orchestrator |
2026-01-07 00:42:02.892563 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892573 | orchestrator | Wednesday 07 January 2026 00:42:00 +0000 (0:00:00.517) 0:00:03.480 *****
2026-01-07 00:42:02.892582 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4)
2026-01-07 00:42:02.892592 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4)
2026-01-07 00:42:02.892602 | orchestrator |
2026-01-07 00:42:02.892611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:02.892621 | orchestrator | Wednesday 07 January 2026 00:42:00 +0000 (0:00:00.742) 0:00:04.223 *****
2026-01-07 00:42:02.892631 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:42:02.892640 | orchestrator |
2026-01-07 00:42:02.892650 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892660 | orchestrator | Wednesday 07 January 2026 00:42:01 +0000 (0:00:00.305) 0:00:04.528 *****
2026-01-07 00:42:02.892670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:42:02.892679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:42:02.892689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:42:02.892716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:42:02.892726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:42:02.892736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:42:02.892745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:42:02.892755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:42:02.892764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-07 00:42:02.892774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:42:02.892784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:42:02.892797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:42:02.892807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:42:02.892817 | orchestrator |
2026-01-07 00:42:02.892827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892837 | orchestrator | Wednesday 07 January 2026 00:42:01 +0000 (0:00:00.373) 0:00:04.901 *****
2026-01-07 00:42:02.892846 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892856 | orchestrator |
2026-01-07 00:42:02.892866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892876 | orchestrator | Wednesday 07 January 2026 00:42:01 +0000 (0:00:00.193) 0:00:05.095 *****
2026-01-07 00:42:02.892886 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892895 | orchestrator |
2026-01-07 00:42:02.892905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892915 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.180) 0:00:05.275 *****
2026-01-07 00:42:02.892925 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892934 | orchestrator |
2026-01-07 00:42:02.892944 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892954 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.180) 0:00:05.456 *****
2026-01-07 00:42:02.892969 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.892979 | orchestrator |
2026-01-07 00:42:02.892989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.892998 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.165) 0:00:05.621 *****
2026-01-07 00:42:02.893008 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.893018 | orchestrator |
2026-01-07 00:42:02.893027 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.893037 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.164) 0:00:05.786 *****
2026-01-07 00:42:02.893047 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.893057 | orchestrator |
2026-01-07 00:42:02.893067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:02.893076 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.160) 0:00:05.947 *****
2026-01-07 00:42:02.893086 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:02.893096 | orchestrator |
2026-01-07 00:42:02.893110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.161527 | orchestrator | Wednesday 07 January 2026 00:42:02 +0000 (0:00:00.188) 0:00:06.135 *****
2026-01-07 00:42:10.161641 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.161659 | orchestrator |
2026-01-07 00:42:10.161672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.161683 | orchestrator | Wednesday 07 January 2026 00:42:03 +0000 (0:00:00.172) 0:00:06.308 *****
2026-01-07 00:42:10.161695 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-07 00:42:10.161706 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-07 00:42:10.161718 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-07 00:42:10.161737 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-07 00:42:10.161755 | orchestrator |
2026-01-07 00:42:10.161773 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.161791 | orchestrator | Wednesday 07 January 2026 00:42:03 +0000 (0:00:00.920) 0:00:07.228 *****
2026-01-07 00:42:10.161808 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.161823 | orchestrator |
2026-01-07 00:42:10.161842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.161861 | orchestrator | Wednesday 07 January 2026 00:42:04 +0000 (0:00:00.190) 0:00:07.419 *****
2026-01-07 00:42:10.161878 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.161895 | orchestrator |
2026-01-07 00:42:10.161912 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.161931 | orchestrator | Wednesday 07 January 2026 00:42:04 +0000 (0:00:00.189) 0:00:07.608 *****
2026-01-07 00:42:10.161951 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.161992 | orchestrator |
2026-01-07 00:42:10.162005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:10.162084 | orchestrator | Wednesday 07 January 2026 00:42:04 +0000 (0:00:00.193) 0:00:07.802 *****
2026-01-07 00:42:10.162102 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162115 | orchestrator |
2026-01-07 00:42:10.162129 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-07 00:42:10.162142 | orchestrator | Wednesday 07 January 2026 00:42:04 +0000 (0:00:00.178) 0:00:07.981 *****
2026-01-07 00:42:10.162154 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162165 | orchestrator |
2026-01-07 00:42:10.162176 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-07 00:42:10.162188 | orchestrator | Wednesday 07 January 2026 00:42:04 +0000 (0:00:00.131) 0:00:08.113 *****
2026-01-07 00:42:10.162200 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23474997-0e8b-5abe-afd2-a58c42930ca8'}})
2026-01-07 00:42:10.162212 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18b58870-6028-5d13-8db0-fb505e00be4b'}})
2026-01-07 00:42:10.162222 | orchestrator |
2026-01-07 00:42:10.162281 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-07 00:42:10.162320 | orchestrator | Wednesday 07 January 2026 00:42:05 +0000 (0:00:00.158) 0:00:08.272 *****
2026-01-07 00:42:10.162333 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162346 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162357 | orchestrator |
2026-01-07 00:42:10.162368 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-07 00:42:10.162379 | orchestrator | Wednesday 07 January 2026 00:42:06 +0000 (0:00:01.898) 0:00:10.170 *****
2026-01-07 00:42:10.162390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162414 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162425 | orchestrator |
2026-01-07 00:42:10.162436 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-07 00:42:10.162447 | orchestrator | Wednesday 07 January 2026 00:42:07 +0000 (0:00:00.114) 0:00:10.285 *****
2026-01-07 00:42:10.162458 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162469 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162480 | orchestrator |
2026-01-07 00:42:10.162491 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-07 00:42:10.162502 | orchestrator | Wednesday 07 January 2026 00:42:08 +0000 (0:00:01.396) 0:00:11.682 *****
2026-01-07 00:42:10.162513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162535 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162545 | orchestrator |
2026-01-07 00:42:10.162556 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-07 00:42:10.162567 | orchestrator | Wednesday 07 January 2026 00:42:08 +0000 (0:00:00.124) 0:00:11.806 *****
2026-01-07 00:42:10.162610 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162629 | orchestrator |
2026-01-07 00:42:10.162647 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-07 00:42:10.162665 | orchestrator | Wednesday 07 January 2026 00:42:08 +0000 (0:00:00.094) 0:00:11.900 *****
2026-01-07 00:42:10.162681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162697 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162715 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162733 | orchestrator |
2026-01-07 00:42:10.162749 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-07 00:42:10.162767 | orchestrator | Wednesday 07 January 2026 00:42:08 +0000 (0:00:00.265) 0:00:12.165 *****
2026-01-07 00:42:10.162785 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162802 | orchestrator |
2026-01-07 00:42:10.162819 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-07 00:42:10.162836 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.149) 0:00:12.314 *****
2026-01-07 00:42:10.162896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.162917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.162936 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.162954 | orchestrator |
2026-01-07 00:42:10.162972 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-07 00:42:10.162990 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.148) 0:00:12.463 *****
2026-01-07 00:42:10.163008 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163026 | orchestrator |
2026-01-07 00:42:10.163045 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-07 00:42:10.163063 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.142) 0:00:12.606 *****
2026-01-07 00:42:10.163082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.163102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.163120 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163137 | orchestrator |
2026-01-07 00:42:10.163148 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-07 00:42:10.163159 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.141) 0:00:12.749 *****
2026-01-07 00:42:10.163170 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:10.163181 | orchestrator |
2026-01-07 00:42:10.163192 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-07 00:42:10.163223 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.141) 0:00:12.891 *****
2026-01-07 00:42:10.163264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.163276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.163295 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163310 | orchestrator |
2026-01-07 00:42:10.163337 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-07 00:42:10.163356 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.160) 0:00:13.051 *****
2026-01-07 00:42:10.163373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.163391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.163409 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163427 | orchestrator |
2026-01-07 00:42:10.163446 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-07 00:42:10.163463 | orchestrator | Wednesday 07 January 2026 00:42:09 +0000 (0:00:00.134) 0:00:13.185 *****
2026-01-07 00:42:10.163482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:10.163499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:10.163518 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163535 | orchestrator |
2026-01-07 00:42:10.163554 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-07 00:42:10.163586 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.105) 0:00:13.291 *****
2026-01-07 00:42:10.163605 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:10.163624 | orchestrator |
2026-01-07 00:42:10.163643 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-07 00:42:10.163677 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.113) 0:00:13.405 *****
2026-01-07 00:42:15.768172 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.768374 | orchestrator |
2026-01-07 00:42:15.768393 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-07 00:42:15.768406 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.101) 0:00:13.506 *****
2026-01-07 00:42:15.768418 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.768429 | orchestrator |
2026-01-07 00:42:15.768440 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-07 00:42:15.768452 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.116) 0:00:13.623 *****
2026-01-07 00:42:15.768463 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:42:15.768475 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-07 00:42:15.768486 | orchestrator | }
2026-01-07 00:42:15.768498 | orchestrator |
2026-01-07 00:42:15.768509 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-07 00:42:15.768520 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.204) 0:00:13.828 *****
2026-01-07 00:42:15.768531 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:42:15.768542 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-07 00:42:15.768553 | orchestrator | }
2026-01-07 00:42:15.768564 | orchestrator |
2026-01-07 00:42:15.768575 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-07 00:42:15.768586 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.171) 0:00:13.999 *****
2026-01-07 00:42:15.768598 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:42:15.768609 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-07 00:42:15.768620 | orchestrator | }
2026-01-07 00:42:15.768631 | orchestrator |
2026-01-07 00:42:15.768642 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-07 00:42:15.768653 | orchestrator | Wednesday 07 January 2026 00:42:10 +0000 (0:00:00.129) 0:00:14.129 *****
2026-01-07 00:42:15.768664 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:15.768675 | orchestrator |
2026-01-07 00:42:15.768688 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-07 00:42:15.768701 | orchestrator | Wednesday 07 January 2026 00:42:11 +0000 (0:00:00.619) 0:00:14.749 *****
2026-01-07 00:42:15.768714 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:15.768726 | orchestrator |
2026-01-07 00:42:15.768739 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-07 00:42:15.768753 | orchestrator | Wednesday 07 January 2026 00:42:11 +0000 (0:00:00.482) 0:00:15.231 *****
2026-01-07 00:42:15.768764 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:15.768775 | orchestrator |
2026-01-07 00:42:15.768786 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-07 00:42:15.768797 | orchestrator | Wednesday 07 January 2026 00:42:12 +0000 (0:00:00.511) 0:00:15.743 *****
2026-01-07 00:42:15.768808 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:15.768819 | orchestrator |
2026-01-07 00:42:15.768830 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-07 00:42:15.768841 | orchestrator | Wednesday 07 January 2026 00:42:12 +0000 (0:00:00.143) 0:00:15.886 *****
2026-01-07 00:42:15.768852 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.768863 | orchestrator |
2026-01-07 00:42:15.768874 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-07 00:42:15.768885 | orchestrator | Wednesday 07 January 2026 00:42:12 +0000 (0:00:00.096) 0:00:15.982 *****
2026-01-07 00:42:15.768895 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.768906 | orchestrator |
2026-01-07 00:42:15.768917 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-07 00:42:15.768971 | orchestrator | Wednesday 07 January 2026 00:42:12 +0000 (0:00:00.088) 0:00:16.070 *****
2026-01-07 00:42:15.768983 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:42:15.768994 | orchestrator |     "vgs_report": {
2026-01-07 00:42:15.769005 | orchestrator |         "vg": []
2026-01-07 00:42:15.769016 | orchestrator |     }
2026-01-07 00:42:15.769027 | orchestrator | }
2026-01-07 00:42:15.769037 | orchestrator |
2026-01-07 00:42:15.769048 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-07 00:42:15.769059 | orchestrator | Wednesday 07 January 2026 00:42:12 +0000 (0:00:00.132) 0:00:16.203 *****
2026-01-07 00:42:15.769070 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769081 | orchestrator |
2026-01-07 00:42:15.769092 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-07 00:42:15.769103 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.115) 0:00:16.318 *****
2026-01-07 00:42:15.769113 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769124 | orchestrator |
2026-01-07 00:42:15.769135 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-07 00:42:15.769146 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.127) 0:00:16.446 *****
2026-01-07 00:42:15.769157 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769167 | orchestrator |
2026-01-07 00:42:15.769178 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-07 00:42:15.769189 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.236) 0:00:16.682 *****
2026-01-07 00:42:15.769200 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769211 | orchestrator |
2026-01-07 00:42:15.769242 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-07 00:42:15.769254 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.116) 0:00:16.799 *****
2026-01-07 00:42:15.769264 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769275 | orchestrator |
2026-01-07 00:42:15.769285 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-07 00:42:15.769296 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.121) 0:00:16.920 *****
2026-01-07 00:42:15.769307 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769318 | orchestrator |
2026-01-07 00:42:15.769328 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-07 00:42:15.769339 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.125) 0:00:17.046 *****
2026-01-07 00:42:15.769350 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769360 | orchestrator |
2026-01-07 00:42:15.769371 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-07 00:42:15.769382 | orchestrator | Wednesday 07 January 2026 00:42:13 +0000 (0:00:00.131) 0:00:17.177 *****
2026-01-07 00:42:15.769413 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769425 | orchestrator |
2026-01-07 00:42:15.769436 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-07 00:42:15.769447 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.120) 0:00:17.298 *****
2026-01-07 00:42:15.769457 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769468 | orchestrator |
2026-01-07 00:42:15.769479 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-07 00:42:15.769490 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.115) 0:00:17.413 *****
2026-01-07 00:42:15.769501 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769511 | orchestrator |
2026-01-07 00:42:15.769522 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-07 00:42:15.769533 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.128) 0:00:17.542 *****
2026-01-07 00:42:15.769544 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769554 | orchestrator |
2026-01-07 00:42:15.769565 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-07 00:42:15.769576 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.110) 0:00:17.652 *****
2026-01-07 00:42:15.769597 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769608 | orchestrator |
2026-01-07 00:42:15.769619 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-07 00:42:15.769630 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.113) 0:00:17.766 *****
2026-01-07 00:42:15.769641 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769652 | orchestrator |
2026-01-07 00:42:15.769662 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-07 00:42:15.769673 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.110) 0:00:17.877 *****
2026-01-07 00:42:15.769684 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769695 | orchestrator |
2026-01-07 00:42:15.769706 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-07 00:42:15.769717 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.122) 0:00:17.999 *****
2026-01-07 00:42:15.769729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:15.769742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:15.769753 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769764 | orchestrator |
2026-01-07 00:42:15.769775 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-07 00:42:15.769785 | orchestrator | Wednesday 07 January 2026 00:42:14 +0000 (0:00:00.244) 0:00:18.244 *****
2026-01-07 00:42:15.769796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:15.769808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:15.769818 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769829 | orchestrator |
2026-01-07 00:42:15.769840 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-07 00:42:15.769851 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.140) 0:00:18.384 *****
2026-01-07 00:42:15.769862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})
2026-01-07 00:42:15.769873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})
2026-01-07 00:42:15.769884 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:15.769895 | orchestrator |
2026-01-07 00:42:15.769906 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-07 00:42:15.769917 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.164) 0:00:18.549 *****
2026-01-07 00:42:15.769928 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:15.769938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:15.769949 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:15.769960 | orchestrator | 2026-01-07 00:42:15.769971 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-07 00:42:15.769982 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.145) 0:00:18.695 ***** 2026-01-07 00:42:15.769993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:15.770004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:15.770095 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:15.770110 | orchestrator | 2026-01-07 00:42:15.770121 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-07 00:42:15.770141 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.167) 0:00:18.862 ***** 2026-01-07 00:42:15.770161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.398447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.398584 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.398602 | orchestrator | 2026-01-07 00:42:20.398615 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-07 00:42:20.398629 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.151) 0:00:19.014 ***** 2026-01-07 00:42:20.398641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.398653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.398665 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.398676 | orchestrator | 2026-01-07 00:42:20.398688 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-07 00:42:20.398700 | orchestrator | Wednesday 07 January 2026 00:42:15 +0000 (0:00:00.124) 0:00:19.139 ***** 2026-01-07 00:42:20.398712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.398723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.398734 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.398745 | orchestrator | 2026-01-07 00:42:20.398756 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-07 00:42:20.398767 | orchestrator | Wednesday 07 January 2026 00:42:16 +0000 (0:00:00.118) 0:00:19.257 ***** 2026-01-07 00:42:20.398779 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:42:20.398791 | orchestrator | 2026-01-07 00:42:20.398802 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-07 00:42:20.398813 | orchestrator | Wednesday 07 January 2026 00:42:16 +0000 
(0:00:00.515) 0:00:19.773 ***** 2026-01-07 00:42:20.398824 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:42:20.398835 | orchestrator | 2026-01-07 00:42:20.398846 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:42:20.398857 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.514) 0:00:20.287 ***** 2026-01-07 00:42:20.398868 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:42:20.398878 | orchestrator | 2026-01-07 00:42:20.398889 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:42:20.398900 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.134) 0:00:20.421 ***** 2026-01-07 00:42:20.398912 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'vg_name': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'}) 2026-01-07 00:42:20.398944 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'vg_name': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'}) 2026-01-07 00:42:20.398956 | orchestrator | 2026-01-07 00:42:20.398974 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:42:20.398992 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.152) 0:00:20.574 ***** 2026-01-07 00:42:20.399053 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.399075 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.399093 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.399111 | orchestrator | 2026-01-07 00:42:20.399163 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-07 00:42:20.399177 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.299) 0:00:20.873 ***** 2026-01-07 00:42:20.399188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.399198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.399210 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.399257 | orchestrator | 2026-01-07 00:42:20.399273 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:42:20.399291 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.141) 0:00:21.014 ***** 2026-01-07 00:42:20.399310 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'})  2026-01-07 00:42:20.399331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'})  2026-01-07 00:42:20.399349 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:42:20.399366 | orchestrator | 2026-01-07 00:42:20.399377 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:42:20.399388 | orchestrator | Wednesday 07 January 2026 00:42:17 +0000 (0:00:00.145) 0:00:21.160 ***** 2026-01-07 00:42:20.399421 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:42:20.399433 | orchestrator |  "lvm_report": { 2026-01-07 00:42:20.399444 | orchestrator |  "lv": [ 2026-01-07 00:42:20.399455 | orchestrator |  { 2026-01-07 00:42:20.399466 | orchestrator |  "lv_name": 
"osd-block-18b58870-6028-5d13-8db0-fb505e00be4b", 2026-01-07 00:42:20.399478 | orchestrator |  "vg_name": "ceph-18b58870-6028-5d13-8db0-fb505e00be4b" 2026-01-07 00:42:20.399489 | orchestrator |  }, 2026-01-07 00:42:20.399500 | orchestrator |  { 2026-01-07 00:42:20.399511 | orchestrator |  "lv_name": "osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8", 2026-01-07 00:42:20.399522 | orchestrator |  "vg_name": "ceph-23474997-0e8b-5abe-afd2-a58c42930ca8" 2026-01-07 00:42:20.399533 | orchestrator |  } 2026-01-07 00:42:20.399544 | orchestrator |  ], 2026-01-07 00:42:20.399555 | orchestrator |  "pv": [ 2026-01-07 00:42:20.399565 | orchestrator |  { 2026-01-07 00:42:20.399576 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:42:20.399587 | orchestrator |  "vg_name": "ceph-23474997-0e8b-5abe-afd2-a58c42930ca8" 2026-01-07 00:42:20.399598 | orchestrator |  }, 2026-01-07 00:42:20.399609 | orchestrator |  { 2026-01-07 00:42:20.399620 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:42:20.399631 | orchestrator |  "vg_name": "ceph-18b58870-6028-5d13-8db0-fb505e00be4b" 2026-01-07 00:42:20.399642 | orchestrator |  } 2026-01-07 00:42:20.399653 | orchestrator |  ] 2026-01-07 00:42:20.399664 | orchestrator |  } 2026-01-07 00:42:20.399675 | orchestrator | } 2026-01-07 00:42:20.399686 | orchestrator | 2026-01-07 00:42:20.399697 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-07 00:42:20.399708 | orchestrator | 2026-01-07 00:42:20.399719 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:42:20.399742 | orchestrator | Wednesday 07 January 2026 00:42:18 +0000 (0:00:00.261) 0:00:21.421 ***** 2026-01-07 00:42:20.399754 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:42:20.399765 | orchestrator | 2026-01-07 00:42:20.399776 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 
00:42:20.399787 | orchestrator | Wednesday 07 January 2026 00:42:18 +0000 (0:00:00.243) 0:00:21.665 ***** 2026-01-07 00:42:20.399798 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:42:20.399809 | orchestrator | 2026-01-07 00:42:20.399820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.399831 | orchestrator | Wednesday 07 January 2026 00:42:18 +0000 (0:00:00.217) 0:00:21.882 ***** 2026-01-07 00:42:20.399842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:42:20.399853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:42:20.399864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:42:20.399875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:42:20.399885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:42:20.399896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:42:20.399916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:42:20.399927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:42:20.399938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:42:20.399949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:42:20.399960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:42:20.399971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:42:20.399982 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:42:20.399993 | orchestrator | 2026-01-07 00:42:20.400004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.400015 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.402) 0:00:22.285 ***** 2026-01-07 00:42:20.400026 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400037 | orchestrator | 2026-01-07 00:42:20.400047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.400058 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.215) 0:00:22.500 ***** 2026-01-07 00:42:20.400069 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400080 | orchestrator | 2026-01-07 00:42:20.400091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.400102 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.185) 0:00:22.686 ***** 2026-01-07 00:42:20.400113 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400124 | orchestrator | 2026-01-07 00:42:20.400135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.400146 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.421) 0:00:23.107 ***** 2026-01-07 00:42:20.400157 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400168 | orchestrator | 2026-01-07 00:42:20.400178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:20.400189 | orchestrator | Wednesday 07 January 2026 00:42:20 +0000 (0:00:00.169) 0:00:23.276 ***** 2026-01-07 00:42:20.400200 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400235 | orchestrator | 2026-01-07 00:42:20.400255 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-07 00:42:20.400286 | orchestrator | Wednesday 07 January 2026 00:42:20 +0000 (0:00:00.186) 0:00:23.463 ***** 2026-01-07 00:42:20.400304 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:20.400323 | orchestrator | 2026-01-07 00:42:20.400352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.763880 | orchestrator | Wednesday 07 January 2026 00:42:20 +0000 (0:00:00.178) 0:00:23.641 ***** 2026-01-07 00:42:31.764000 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764021 | orchestrator | 2026-01-07 00:42:31.764060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764072 | orchestrator | Wednesday 07 January 2026 00:42:20 +0000 (0:00:00.181) 0:00:23.823 ***** 2026-01-07 00:42:31.764084 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764096 | orchestrator | 2026-01-07 00:42:31.764109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764120 | orchestrator | Wednesday 07 January 2026 00:42:20 +0000 (0:00:00.167) 0:00:23.990 ***** 2026-01-07 00:42:31.764133 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c) 2026-01-07 00:42:31.764146 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c) 2026-01-07 00:42:31.764158 | orchestrator | 2026-01-07 00:42:31.764171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764183 | orchestrator | Wednesday 07 January 2026 00:42:21 +0000 (0:00:00.371) 0:00:24.362 ***** 2026-01-07 00:42:31.764195 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83) 2026-01-07 00:42:31.764252 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83) 2026-01-07 00:42:31.764263 | orchestrator | 2026-01-07 00:42:31.764275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764286 | orchestrator | Wednesday 07 January 2026 00:42:21 +0000 (0:00:00.354) 0:00:24.716 ***** 2026-01-07 00:42:31.764298 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d) 2026-01-07 00:42:31.764309 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d) 2026-01-07 00:42:31.764320 | orchestrator | 2026-01-07 00:42:31.764331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764342 | orchestrator | Wednesday 07 January 2026 00:42:21 +0000 (0:00:00.390) 0:00:25.106 ***** 2026-01-07 00:42:31.764354 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8) 2026-01-07 00:42:31.764366 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8) 2026-01-07 00:42:31.764378 | orchestrator | 2026-01-07 00:42:31.764391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:42:31.764404 | orchestrator | Wednesday 07 January 2026 00:42:22 +0000 (0:00:00.570) 0:00:25.677 ***** 2026-01-07 00:42:31.764418 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:42:31.764431 | orchestrator | 2026-01-07 00:42:31.764443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764456 | orchestrator | Wednesday 07 January 2026 00:42:22 +0000 (0:00:00.540) 0:00:26.218 ***** 2026-01-07 00:42:31.764469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-07 00:42:31.764483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:42:31.764496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:42:31.764509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:42:31.764521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:42:31.764575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:42:31.764585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:42:31.764593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:42:31.764602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:42:31.764611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:42:31.764619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:42:31.764627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:42:31.764636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:42:31.764645 | orchestrator | 2026-01-07 00:42:31.764653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764662 | orchestrator | Wednesday 07 January 2026 00:42:23 +0000 (0:00:00.692) 0:00:26.910 ***** 2026-01-07 00:42:31.764670 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764679 | orchestrator | 2026-01-07 
00:42:31.764688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764696 | orchestrator | Wednesday 07 January 2026 00:42:23 +0000 (0:00:00.235) 0:00:27.146 ***** 2026-01-07 00:42:31.764705 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764713 | orchestrator | 2026-01-07 00:42:31.764721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764729 | orchestrator | Wednesday 07 January 2026 00:42:24 +0000 (0:00:00.209) 0:00:27.356 ***** 2026-01-07 00:42:31.764738 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764745 | orchestrator | 2026-01-07 00:42:31.764769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764777 | orchestrator | Wednesday 07 January 2026 00:42:24 +0000 (0:00:00.213) 0:00:27.569 ***** 2026-01-07 00:42:31.764784 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764792 | orchestrator | 2026-01-07 00:42:31.764799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764806 | orchestrator | Wednesday 07 January 2026 00:42:24 +0000 (0:00:00.248) 0:00:27.818 ***** 2026-01-07 00:42:31.764814 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764821 | orchestrator | 2026-01-07 00:42:31.764828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764835 | orchestrator | Wednesday 07 January 2026 00:42:24 +0000 (0:00:00.211) 0:00:28.029 ***** 2026-01-07 00:42:31.764843 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764850 | orchestrator | 2026-01-07 00:42:31.764857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764864 | orchestrator | Wednesday 07 January 2026 00:42:24 +0000 (0:00:00.195) 
0:00:28.224 ***** 2026-01-07 00:42:31.764872 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764879 | orchestrator | 2026-01-07 00:42:31.764886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764893 | orchestrator | Wednesday 07 January 2026 00:42:25 +0000 (0:00:00.216) 0:00:28.441 ***** 2026-01-07 00:42:31.764903 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.764915 | orchestrator | 2026-01-07 00:42:31.764927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.764937 | orchestrator | Wednesday 07 January 2026 00:42:25 +0000 (0:00:00.218) 0:00:28.660 ***** 2026-01-07 00:42:31.764949 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:42:31.764961 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:42:31.764975 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:42:31.764987 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:42:31.765008 | orchestrator | 2026-01-07 00:42:31.765021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.765033 | orchestrator | Wednesday 07 January 2026 00:42:26 +0000 (0:00:01.162) 0:00:29.823 ***** 2026-01-07 00:42:31.765045 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.765057 | orchestrator | 2026-01-07 00:42:31.765070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.765081 | orchestrator | Wednesday 07 January 2026 00:42:26 +0000 (0:00:00.222) 0:00:30.045 ***** 2026-01-07 00:42:31.765094 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.765102 | orchestrator | 2026-01-07 00:42:31.765109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.765116 | orchestrator | Wednesday 07 
January 2026 00:42:27 +0000 (0:00:00.738) 0:00:30.784 ***** 2026-01-07 00:42:31.765123 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.765130 | orchestrator | 2026-01-07 00:42:31.765137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:31.765145 | orchestrator | Wednesday 07 January 2026 00:42:27 +0000 (0:00:00.193) 0:00:30.977 ***** 2026-01-07 00:42:31.765152 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.765159 | orchestrator | 2026-01-07 00:42:31.765166 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:42:31.765180 | orchestrator | Wednesday 07 January 2026 00:42:27 +0000 (0:00:00.266) 0:00:31.243 ***** 2026-01-07 00:42:31.765187 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:42:31.765194 | orchestrator | 2026-01-07 00:42:31.765222 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:42:31.765230 | orchestrator | Wednesday 07 January 2026 00:42:28 +0000 (0:00:00.169) 0:00:31.413 ***** 2026-01-07 00:42:31.765237 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b296d094-78ce-5ce3-9fe3-598726116dc8'}}) 2026-01-07 00:42:31.765245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73010335-3e9e-51ea-81b3-4dcf5932c07d'}}) 2026-01-07 00:42:31.765252 | orchestrator | 2026-01-07 00:42:31.765259 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:42:31.765266 | orchestrator | Wednesday 07 January 2026 00:42:28 +0000 (0:00:00.202) 0:00:31.616 ***** 2026-01-07 00:42:31.765275 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'}) 2026-01-07 00:42:31.765284 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:31.765292 | orchestrator |
2026-01-07 00:42:31.765299 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-07 00:42:31.765306 | orchestrator | Wednesday 07 January 2026  00:42:30 +0000 (0:00:01.894)       0:00:33.511 *****
2026-01-07 00:42:31.765313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:31.765322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:31.765330 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:31.765337 | orchestrator |
2026-01-07 00:42:31.765359 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-07 00:42:31.765367 | orchestrator | Wednesday 07 January 2026  00:42:30 +0000 (0:00:00.232)       0:00:33.743 *****
2026-01-07 00:42:31.765374 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:31.765390 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.924674 | orchestrator |
2026-01-07 00:42:36.924809 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-07 00:42:36.924827 | orchestrator | Wednesday 07 January 2026  00:42:31 +0000 (0:00:01.263)       0:00:35.007 *****
2026-01-07 00:42:36.924840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.924853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.924864 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.924877 | orchestrator |
2026-01-07 00:42:36.924889 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-07 00:42:36.924909 | orchestrator | Wednesday 07 January 2026  00:42:31 +0000 (0:00:00.161)       0:00:35.168 *****
2026-01-07 00:42:36.924937 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.924961 | orchestrator |
2026-01-07 00:42:36.924979 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-07 00:42:36.924998 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.116)       0:00:35.285 *****
2026-01-07 00:42:36.925016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925053 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925071 | orchestrator |
2026-01-07 00:42:36.925092 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-07 00:42:36.925105 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.135)       0:00:35.420 *****
2026-01-07 00:42:36.925116 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925127 | orchestrator |
2026-01-07 00:42:36.925157 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-07 00:42:36.925171 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.123)       0:00:35.543 *****
2026-01-07 00:42:36.925183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925232 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925245 | orchestrator |
2026-01-07 00:42:36.925258 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-07 00:42:36.925293 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.250)       0:00:35.793 *****
2026-01-07 00:42:36.925306 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925319 | orchestrator |
2026-01-07 00:42:36.925331 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-07 00:42:36.925344 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.142)       0:00:35.936 *****
2026-01-07 00:42:36.925356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925393 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925412 | orchestrator |
2026-01-07 00:42:36.925430 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-07 00:42:36.925448 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.158)       0:00:36.095 *****
2026-01-07 00:42:36.925469 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:36.925522 | orchestrator |
2026-01-07 00:42:36.925543 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-07 00:42:36.925559 | orchestrator | Wednesday 07 January 2026  00:42:32 +0000 (0:00:00.135)       0:00:36.230 *****
2026-01-07 00:42:36.925571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925593 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925603 | orchestrator |
2026-01-07 00:42:36.925614 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-07 00:42:36.925625 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.158)       0:00:36.388 *****
2026-01-07 00:42:36.925636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925657 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925668 | orchestrator |
2026-01-07 00:42:36.925679 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-07 00:42:36.925711 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.159)       0:00:36.548 *****
2026-01-07 00:42:36.925723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:36.925734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:36.925745 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925756 | orchestrator |
2026-01-07 00:42:36.925766 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-07 00:42:36.925777 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.155)       0:00:36.704 *****
2026-01-07 00:42:36.925788 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925799 | orchestrator |
2026-01-07 00:42:36.925810 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-07 00:42:36.925821 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.115)       0:00:36.820 *****
2026-01-07 00:42:36.925832 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925843 | orchestrator |
2026-01-07 00:42:36.925853 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-07 00:42:36.925864 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.130)       0:00:36.950 *****
2026-01-07 00:42:36.925875 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.925886 | orchestrator |
2026-01-07 00:42:36.925897 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-07 00:42:36.925908 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.144)       0:00:37.095 *****
2026-01-07 00:42:36.925919 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:42:36.925930 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-07 00:42:36.925941 | orchestrator | }
2026-01-07 00:42:36.925952 | orchestrator |
2026-01-07 00:42:36.925963 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-07 00:42:36.925989 | orchestrator | Wednesday 07 January 2026  00:42:33 +0000 (0:00:00.149)       0:00:37.244 *****
2026-01-07 00:42:36.926000 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:42:36.926011 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-07 00:42:36.926103 | orchestrator | }
2026-01-07 00:42:36.926122 | orchestrator |
2026-01-07 00:42:36.926141 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-07 00:42:36.926159 | orchestrator | Wednesday 07 January 2026  00:42:34 +0000 (0:00:00.134)       0:00:37.379 *****
2026-01-07 00:42:36.926356 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:42:36.926529 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-07 00:42:36.926546 | orchestrator | }
2026-01-07 00:42:36.926556 | orchestrator |
2026-01-07 00:42:36.926565 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-07 00:42:36.926576 | orchestrator | Wednesday 07 January 2026  00:42:34 +0000 (0:00:00.266)       0:00:37.646 *****
2026-01-07 00:42:36.926584 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:36.926593 | orchestrator |
2026-01-07 00:42:36.926602 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-07 00:42:36.926610 | orchestrator | Wednesday 07 January 2026  00:42:34 +0000 (0:00:00.518)       0:00:38.164 *****
2026-01-07 00:42:36.926618 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:36.926626 | orchestrator |
2026-01-07 00:42:36.926634 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-07 00:42:36.926642 | orchestrator | Wednesday 07 January 2026  00:42:35 +0000 (0:00:00.524)       0:00:38.688 *****
2026-01-07 00:42:36.926650 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:36.926658 | orchestrator |
2026-01-07 00:42:36.926666 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-07 00:42:36.926674 | orchestrator | Wednesday 07 January 2026  00:42:35 +0000 (0:00:00.139)       0:00:39.197 *****
2026-01-07 00:42:36.926681 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:36.926689 | orchestrator |
2026-01-07 00:42:36.926697 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-07 00:42:36.926705 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.099)       0:00:39.336 *****
2026-01-07 00:42:36.926713 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926721 | orchestrator |
2026-01-07 00:42:36.926753 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-07 00:42:36.926762 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.108)       0:00:39.436 *****
2026-01-07 00:42:36.926770 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926777 | orchestrator |
2026-01-07 00:42:36.926785 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-07 00:42:36.926793 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.133)       0:00:39.544 *****
2026-01-07 00:42:36.926801 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:42:36.926809 | orchestrator |     "vgs_report": {
2026-01-07 00:42:36.926817 | orchestrator |         "vg": []
2026-01-07 00:42:36.926825 | orchestrator |     }
2026-01-07 00:42:36.926833 | orchestrator | }
2026-01-07 00:42:36.926841 | orchestrator |
2026-01-07 00:42:36.926849 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-07 00:42:36.926857 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.133)       0:00:39.678 *****
2026-01-07 00:42:36.926865 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926873 | orchestrator |
2026-01-07 00:42:36.926881 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-07 00:42:36.926890 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.125)       0:00:39.803 *****
2026-01-07 00:42:36.926897 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926905 | orchestrator |
2026-01-07 00:42:36.926913 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-07 00:42:36.926921 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.122)       0:00:39.925 *****
2026-01-07 00:42:36.926929 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926937 | orchestrator |
2026-01-07 00:42:36.926945 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-07 00:42:36.926953 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.122)       0:00:40.048 *****
2026-01-07 00:42:36.926961 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:36.926969 | orchestrator |
2026-01-07 00:42:36.927006 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-07 00:42:41.384486 | orchestrator | Wednesday 07 January 2026  00:42:36 +0000 (0:00:00.120)       0:00:40.168 *****
2026-01-07 00:42:41.384623 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384641 | orchestrator |
2026-01-07 00:42:41.384654 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-07 00:42:41.384666 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.216)       0:00:40.385 *****
2026-01-07 00:42:41.384677 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384688 | orchestrator |
2026-01-07 00:42:41.384700 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-07 00:42:41.384711 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.116)       0:00:40.501 *****
2026-01-07 00:42:41.384722 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384733 | orchestrator |
2026-01-07 00:42:41.384744 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-07 00:42:41.384755 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.128)       0:00:40.629 *****
2026-01-07 00:42:41.384766 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384777 | orchestrator |
2026-01-07 00:42:41.384788 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-07 00:42:41.384800 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.122)       0:00:40.752 *****
2026-01-07 00:42:41.384820 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384839 | orchestrator |
2026-01-07 00:42:41.384856 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-07 00:42:41.384874 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.125)       0:00:40.877 *****
2026-01-07 00:42:41.384891 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384909 | orchestrator |
2026-01-07 00:42:41.384928 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-07 00:42:41.384948 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.138)       0:00:41.016 *****
2026-01-07 00:42:41.384966 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.384985 | orchestrator |
2026-01-07 00:42:41.384996 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-07 00:42:41.385007 | orchestrator | Wednesday 07 January 2026  00:42:37 +0000 (0:00:00.134)       0:00:41.150 *****
2026-01-07 00:42:41.385021 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385033 | orchestrator |
2026-01-07 00:42:41.385047 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-07 00:42:41.385059 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.164)       0:00:41.315 *****
2026-01-07 00:42:41.385072 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385084 | orchestrator |
2026-01-07 00:42:41.385097 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-07 00:42:41.385109 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.128)       0:00:41.443 *****
2026-01-07 00:42:41.385122 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385134 | orchestrator |
2026-01-07 00:42:41.385148 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-07 00:42:41.385175 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.134)       0:00:41.577 *****
2026-01-07 00:42:41.385252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385284 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385297 | orchestrator |
2026-01-07 00:42:41.385310 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-07 00:42:41.385323 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.168)       0:00:41.745 *****
2026-01-07 00:42:41.385336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385371 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385381 | orchestrator |
2026-01-07 00:42:41.385392 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-07 00:42:41.385403 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.151)       0:00:41.897 *****
2026-01-07 00:42:41.385414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385436 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385447 | orchestrator |
2026-01-07 00:42:41.385458 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-07 00:42:41.385469 | orchestrator | Wednesday 07 January 2026  00:42:38 +0000 (0:00:00.158)       0:00:42.055 *****
2026-01-07 00:42:41.385480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385502 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385513 | orchestrator |
2026-01-07 00:42:41.385545 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-07 00:42:41.385557 | orchestrator | Wednesday 07 January 2026  00:42:39 +0000 (0:00:00.323)       0:00:42.379 *****
2026-01-07 00:42:41.385568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385590 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385601 | orchestrator |
2026-01-07 00:42:41.385612 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-07 00:42:41.385623 | orchestrator | Wednesday 07 January 2026  00:42:39 +0000 (0:00:00.166)       0:00:42.546 *****
2026-01-07 00:42:41.385635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385657 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385668 | orchestrator |
2026-01-07 00:42:41.385679 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-07 00:42:41.385690 | orchestrator | Wednesday 07 January 2026  00:42:39 +0000 (0:00:00.150)       0:00:42.696 *****
2026-01-07 00:42:41.385701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385723 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385734 | orchestrator |
2026-01-07 00:42:41.385745 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-07 00:42:41.385756 | orchestrator | Wednesday 07 January 2026  00:42:39 +0000 (0:00:00.159)       0:00:42.855 *****
2026-01-07 00:42:41.385775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.385792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.385803 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.385814 | orchestrator |
2026-01-07 00:42:41.385825 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-07 00:42:41.385836 | orchestrator | Wednesday 07 January 2026  00:42:39 +0000 (0:00:00.159)       0:00:43.014 *****
2026-01-07 00:42:41.385847 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:41.385858 | orchestrator |
2026-01-07 00:42:41.385869 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-07 00:42:41.385880 | orchestrator | Wednesday 07 January 2026  00:42:40 +0000 (0:00:00.520)       0:00:43.535 *****
2026-01-07 00:42:41.385891 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:41.385902 | orchestrator |
2026-01-07 00:42:41.385913 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-07 00:42:41.385924 | orchestrator | Wednesday 07 January 2026  00:42:40 +0000 (0:00:00.500)       0:00:44.035 *****
2026-01-07 00:42:41.385935 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:41.385946 | orchestrator |
2026-01-07 00:42:41.385957 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-07 00:42:41.385968 | orchestrator | Wednesday 07 January 2026  00:42:40 +0000 (0:00:00.132)       0:00:44.168 *****
2026-01-07 00:42:41.385986 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'vg_name': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.386006 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'vg_name': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.386107 | orchestrator |
2026-01-07 00:42:41.386129 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-07 00:42:41.386147 | orchestrator | Wednesday 07 January 2026  00:42:41 +0000 (0:00:00.160)       0:00:44.329 *****
2026-01-07 00:42:41.386167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.386223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:41.386238 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:41.386248 | orchestrator |
2026-01-07 00:42:41.386259 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-07 00:42:41.386271 | orchestrator | Wednesday 07 January 2026  00:42:41 +0000 (0:00:00.147)       0:00:44.476 *****
2026-01-07 00:42:41.386281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:41.386303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:47.365419 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:47.365551 | orchestrator |
2026-01-07 00:42:47.365573 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-07 00:42:47.365587 | orchestrator | Wednesday 07 January 2026  00:42:41 +0000 (0:00:00.153)       0:00:44.630 *****
2026-01-07 00:42:47.365597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'})
2026-01-07 00:42:47.365609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'})
2026-01-07 00:42:47.365621 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:47.365689 | orchestrator |
2026-01-07 00:42:47.365710 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-07 00:42:47.365728 | orchestrator | Wednesday 07 January 2026  00:42:41 +0000 (0:00:00.141)       0:00:44.771 *****
2026-01-07 00:42:47.365746 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:42:47.365763 | orchestrator |     "lvm_report": {
2026-01-07 00:42:47.365782 | orchestrator |         "lv": [
2026-01-07 00:42:47.365799 | orchestrator |             {
2026-01-07 00:42:47.365817 | orchestrator |                 "lv_name": "osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d",
2026-01-07 00:42:47.365836 | orchestrator |                 "vg_name": "ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d"
2026-01-07 00:42:47.365852 | orchestrator |             },
2026-01-07 00:42:47.365867 | orchestrator |             {
2026-01-07 00:42:47.365878 | orchestrator |                 "lv_name": "osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8",
2026-01-07 00:42:47.365890 | orchestrator |                 "vg_name": "ceph-b296d094-78ce-5ce3-9fe3-598726116dc8"
2026-01-07 00:42:47.365901 | orchestrator |             }
2026-01-07 00:42:47.365917 | orchestrator |         ],
2026-01-07 00:42:47.365934 | orchestrator |         "pv": [
2026-01-07 00:42:47.365952 | orchestrator |             {
2026-01-07 00:42:47.365968 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-07 00:42:47.365983 | orchestrator |                 "vg_name": "ceph-b296d094-78ce-5ce3-9fe3-598726116dc8"
2026-01-07 00:42:47.365992 | orchestrator |             },
2026-01-07 00:42:47.366006 | orchestrator |             {
2026-01-07 00:42:47.366097 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-07 00:42:47.366116 | orchestrator |                 "vg_name": "ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d"
2026-01-07 00:42:47.366134 | orchestrator |             }
2026-01-07 00:42:47.366149 | orchestrator |         ]
2026-01-07 00:42:47.366165 | orchestrator |     }
2026-01-07 00:42:47.366175 | orchestrator | }
2026-01-07 00:42:47.366216 | orchestrator |
2026-01-07 00:42:47.366227 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-07 00:42:47.366236 | orchestrator |
2026-01-07 00:42:47.366246 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:42:47.366256 | orchestrator | Wednesday 07 January 2026  00:42:41 +0000 (0:00:00.467)       0:00:45.238 *****
2026-01-07 00:42:47.366266 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-07 00:42:47.366275 | orchestrator |
2026-01-07 00:42:47.366285 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:42:47.366295 | orchestrator | Wednesday 07 January 2026  00:42:42 +0000 (0:00:00.241)       0:00:45.479 *****
2026-01-07 00:42:47.366305 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:42:47.366314 | orchestrator |
2026-01-07 00:42:47.366324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366333 | orchestrator | Wednesday 07 January 2026  00:42:42 +0000 (0:00:00.230)       0:00:45.710 *****
2026-01-07 00:42:47.366343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:42:47.366353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:42:47.366362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:42:47.366372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:42:47.366381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:42:47.366390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:42:47.366400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:42:47.366409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:42:47.366418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-07 00:42:47.366517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:42:47.366535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:42:47.366551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:42:47.366568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:42:47.366584 | orchestrator |
2026-01-07 00:42:47.366608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366626 | orchestrator | Wednesday 07 January 2026  00:42:42 +0000 (0:00:00.399)       0:00:46.109 *****
2026-01-07 00:42:47.366645 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366663 | orchestrator |
2026-01-07 00:42:47.366680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366695 | orchestrator | Wednesday 07 January 2026  00:42:43 +0000 (0:00:00.224)       0:00:46.333 *****
2026-01-07 00:42:47.366704 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366714 | orchestrator |
2026-01-07 00:42:47.366724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366754 | orchestrator | Wednesday 07 January 2026  00:42:43 +0000 (0:00:00.248)       0:00:46.582 *****
2026-01-07 00:42:47.366764 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366774 | orchestrator |
2026-01-07 00:42:47.366783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366793 | orchestrator | Wednesday 07 January 2026  00:42:43 +0000 (0:00:00.192)       0:00:46.774 *****
2026-01-07 00:42:47.366802 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366812 | orchestrator |
2026-01-07 00:42:47.366821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366874 | orchestrator | Wednesday 07 January 2026  00:42:43 +0000 (0:00:00.198)       0:00:46.973 *****
2026-01-07 00:42:47.366885 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366895 | orchestrator |
2026-01-07 00:42:47.366904 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366914 | orchestrator | Wednesday 07 January 2026  00:42:43 +0000 (0:00:00.184)       0:00:47.158 *****
2026-01-07 00:42:47.366924 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366933 | orchestrator |
2026-01-07 00:42:47.366943 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366952 | orchestrator | Wednesday 07 January 2026  00:42:44 +0000 (0:00:00.644)       0:00:47.802 *****
2026-01-07 00:42:47.366961 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.366971 | orchestrator |
2026-01-07 00:42:47.366980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.366990 | orchestrator | Wednesday 07 January 2026  00:42:44 +0000 (0:00:00.207)       0:00:48.010 *****
2026-01-07 00:42:47.366999 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:47.367009 | orchestrator |
2026-01-07 00:42:47.367019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.367028 | orchestrator | Wednesday 07 January 2026  00:42:44 +0000 (0:00:00.184)       0:00:48.195 *****
2026-01-07 00:42:47.367038 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e)
2026-01-07 00:42:47.367049 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e)
2026-01-07 00:42:47.367058 | orchestrator |
2026-01-07 00:42:47.367068 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.367078 | orchestrator | Wednesday 07 January 2026  00:42:45 +0000 (0:00:00.417)       0:00:48.612 *****
2026-01-07 00:42:47.367087 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb)
2026-01-07 00:42:47.367096 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb)
2026-01-07 00:42:47.367106 | orchestrator |
2026-01-07 00:42:47.367124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.367138 | orchestrator | Wednesday 07 January 2026  00:42:45 +0000 (0:00:00.403)       0:00:49.015 *****
2026-01-07 00:42:47.367148 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6)
2026-01-07 00:42:47.367158 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6)
2026-01-07 00:42:47.367167 | orchestrator |
2026-01-07 00:42:47.367176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.367215 | orchestrator | Wednesday 07 January 2026  00:42:46 +0000 (0:00:00.427)       0:00:49.442 *****
2026-01-07 00:42:47.367224 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e)
2026-01-07 00:42:47.367234 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e)
2026-01-07 00:42:47.367243 | orchestrator |
2026-01-07 00:42:47.367253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:42:47.367262 | orchestrator | Wednesday 07 January 2026  00:42:46 +0000 (0:00:00.426)       0:00:49.868 *****
2026-01-07 00:42:47.367272 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:42:47.367281 | orchestrator |
2026-01-07 00:42:47.367290 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:47.367300 | orchestrator | Wednesday 07 January 2026  00:42:46 +0000 (0:00:00.314)       0:00:50.183 *****
2026-01-07 00:42:47.367309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:42:47.367318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:42:47.367328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:42:47.367337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:42:47.367347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:42:47.367356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:42:47.367365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:42:47.367375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:42:47.367384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-07 00:42:47.367394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:42:47.367403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:42:47.367420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:42:55.380524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:42:55.380647 | orchestrator |
2026-01-07 00:42:55.380668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:55.380683 | orchestrator | Wednesday 07 January 2026  00:42:47 +0000 (0:00:00.420)       0:00:50.604 *****
2026-01-07 00:42:55.380696 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:55.380711 | orchestrator |
2026-01-07 00:42:55.380724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:55.380738 | orchestrator | Wednesday 07 January 2026  00:42:47 +0000 (0:00:00.202)       0:00:50.806 *****
2026-01-07 00:42:55.380751 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:55.380765 | orchestrator |
2026-01-07 00:42:55.380778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:55.380792 | orchestrator | Wednesday 07 January 2026  00:42:48 +0000 (0:00:00.564)       0:00:51.370 *****
2026-01-07 00:42:55.380831 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:55.380845 | orchestrator |
2026-01-07 00:42:55.380858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:42:55.380871 |
orchestrator | Wednesday 07 January 2026 00:42:48 +0000 (0:00:00.171) 0:00:51.541 ***** 2026-01-07 00:42:55.380885 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.380898 | orchestrator | 2026-01-07 00:42:55.380911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.380924 | orchestrator | Wednesday 07 January 2026 00:42:48 +0000 (0:00:00.159) 0:00:51.701 ***** 2026-01-07 00:42:55.380937 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.380950 | orchestrator | 2026-01-07 00:42:55.380963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.380976 | orchestrator | Wednesday 07 January 2026 00:42:48 +0000 (0:00:00.174) 0:00:51.875 ***** 2026-01-07 00:42:55.380990 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381003 | orchestrator | 2026-01-07 00:42:55.381016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381029 | orchestrator | Wednesday 07 January 2026 00:42:48 +0000 (0:00:00.177) 0:00:52.053 ***** 2026-01-07 00:42:55.381042 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381057 | orchestrator | 2026-01-07 00:42:55.381071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381085 | orchestrator | Wednesday 07 January 2026 00:42:48 +0000 (0:00:00.171) 0:00:52.224 ***** 2026-01-07 00:42:55.381099 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381113 | orchestrator | 2026-01-07 00:42:55.381127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381142 | orchestrator | Wednesday 07 January 2026 00:42:49 +0000 (0:00:00.194) 0:00:52.419 ***** 2026-01-07 00:42:55.381224 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-07 00:42:55.381241 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-07 00:42:55.381255 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-07 00:42:55.381270 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-07 00:42:55.381284 | orchestrator | 2026-01-07 00:42:55.381298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381313 | orchestrator | Wednesday 07 January 2026 00:42:49 +0000 (0:00:00.571) 0:00:52.990 ***** 2026-01-07 00:42:55.381327 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381342 | orchestrator | 2026-01-07 00:42:55.381355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381370 | orchestrator | Wednesday 07 January 2026 00:42:49 +0000 (0:00:00.166) 0:00:53.157 ***** 2026-01-07 00:42:55.381384 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381398 | orchestrator | 2026-01-07 00:42:55.381412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381426 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.157) 0:00:53.314 ***** 2026-01-07 00:42:55.381440 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381453 | orchestrator | 2026-01-07 00:42:55.381466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:42:55.381479 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.180) 0:00:53.495 ***** 2026-01-07 00:42:55.381493 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381506 | orchestrator | 2026-01-07 00:42:55.381518 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:42:55.381531 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.176) 0:00:53.671 ***** 2026-01-07 00:42:55.381543 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
00:42:55.381557 | orchestrator | 2026-01-07 00:42:55.381570 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:42:55.381584 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.228) 0:00:53.900 ***** 2026-01-07 00:42:55.381597 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '96f57bfe-16b3-5bb1-823a-e63af6581955'}}) 2026-01-07 00:42:55.381621 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e44d1cae-1e57-574a-aa47-ecf7991dd637'}}) 2026-01-07 00:42:55.381635 | orchestrator | 2026-01-07 00:42:55.381649 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:42:55.381663 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.172) 0:00:54.073 ***** 2026-01-07 00:42:55.381677 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'}) 2026-01-07 00:42:55.381692 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'}) 2026-01-07 00:42:55.381705 | orchestrator | 2026-01-07 00:42:55.381718 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-07 00:42:55.381752 | orchestrator | Wednesday 07 January 2026 00:42:52 +0000 (0:00:01.779) 0:00:55.853 ***** 2026-01-07 00:42:55.381766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:42:55.381780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:42:55.381791 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:42:55.381802 | orchestrator | 2026-01-07 00:42:55.381813 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-07 00:42:55.381824 | orchestrator | Wednesday 07 January 2026 00:42:52 +0000 (0:00:00.120) 0:00:55.973 ***** 2026-01-07 00:42:55.381835 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'}) 2026-01-07 00:42:55.381846 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'}) 2026-01-07 00:42:55.381857 | orchestrator | 2026-01-07 00:42:55.381868 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-07 00:42:55.381878 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:01.322) 0:00:57.296 ***** 2026-01-07 00:42:55.381888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:42:55.381898 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:42:55.381909 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381919 | orchestrator | 2026-01-07 00:42:55.381929 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-07 00:42:55.381941 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.142) 0:00:57.438 ***** 2026-01-07 00:42:55.381952 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.381963 | orchestrator | 2026-01-07 00:42:55.381974 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-07 00:42:55.381985 | 
orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.128) 0:00:57.567 ***** 2026-01-07 00:42:55.382004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:42:55.382073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:42:55.382088 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.382099 | orchestrator | 2026-01-07 00:42:55.382111 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-07 00:42:55.382131 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.134) 0:00:57.701 ***** 2026-01-07 00:42:55.382143 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.382154 | orchestrator | 2026-01-07 00:42:55.382166 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-07 00:42:55.382195 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.119) 0:00:57.821 ***** 2026-01-07 00:42:55.382207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:42:55.382217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:42:55.382230 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.382241 | orchestrator | 2026-01-07 00:42:55.382252 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-07 00:42:55.382263 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.147) 0:00:57.968 ***** 2026-01-07 00:42:55.382274 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 00:42:55.382285 | orchestrator | 2026-01-07 00:42:55.382296 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-07 00:42:55.382307 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:00.136) 0:00:58.105 ***** 2026-01-07 00:42:55.382318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:42:55.382329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:42:55.382340 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:42:55.382351 | orchestrator | 2026-01-07 00:42:55.382362 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-07 00:42:55.382374 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.148) 0:00:58.254 ***** 2026-01-07 00:42:55.382385 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:42:55.382396 | orchestrator | 2026-01-07 00:42:55.382407 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-07 00:42:55.382417 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.237) 0:00:58.492 ***** 2026-01-07 00:42:55.382437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:00.945407 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:00.945520 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945538 | orchestrator | 2026-01-07 00:43:00.945551 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-07 00:43:00.945564 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.134) 0:00:58.626 ***** 2026-01-07 00:43:00.945576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:00.945588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:00.945599 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945611 | orchestrator | 2026-01-07 00:43:00.945622 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-07 00:43:00.945633 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.148) 0:00:58.775 ***** 2026-01-07 00:43:00.945645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:00.945656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:00.945688 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945700 | orchestrator | 2026-01-07 00:43:00.945711 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-07 00:43:00.945722 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.140) 0:00:58.915 ***** 2026-01-07 00:43:00.945733 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945744 | orchestrator | 2026-01-07 00:43:00.945754 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-07 00:43:00.945766 | orchestrator | Wednesday 07 January 2026 00:42:55 
+0000 (0:00:00.113) 0:00:59.029 ***** 2026-01-07 00:43:00.945777 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945787 | orchestrator | 2026-01-07 00:43:00.945798 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-07 00:43:00.945809 | orchestrator | Wednesday 07 January 2026 00:42:55 +0000 (0:00:00.126) 0:00:59.156 ***** 2026-01-07 00:43:00.945820 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.945831 | orchestrator | 2026-01-07 00:43:00.945842 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-07 00:43:00.945853 | orchestrator | Wednesday 07 January 2026 00:42:56 +0000 (0:00:00.122) 0:00:59.278 ***** 2026-01-07 00:43:00.945863 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:43:00.945875 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-07 00:43:00.945886 | orchestrator | } 2026-01-07 00:43:00.945897 | orchestrator | 2026-01-07 00:43:00.945908 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-07 00:43:00.945919 | orchestrator | Wednesday 07 January 2026 00:42:56 +0000 (0:00:00.123) 0:00:59.401 ***** 2026-01-07 00:43:00.945930 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:43:00.945943 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-07 00:43:00.945956 | orchestrator | } 2026-01-07 00:43:00.945968 | orchestrator | 2026-01-07 00:43:00.945981 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-07 00:43:00.945994 | orchestrator | Wednesday 07 January 2026 00:42:56 +0000 (0:00:00.114) 0:00:59.516 ***** 2026-01-07 00:43:00.946007 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:43:00.946080 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-07 00:43:00.946095 | orchestrator | } 2026-01-07 00:43:00.946107 | orchestrator | 2026-01-07 00:43:00.946120 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-07 00:43:00.946134 | orchestrator | Wednesday 07 January 2026 00:42:56 +0000 (0:00:00.125) 0:00:59.642 ***** 2026-01-07 00:43:00.946147 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:00.946159 | orchestrator | 2026-01-07 00:43:00.946197 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-07 00:43:00.946210 | orchestrator | Wednesday 07 January 2026 00:42:56 +0000 (0:00:00.515) 0:01:00.158 ***** 2026-01-07 00:43:00.946223 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:00.946237 | orchestrator | 2026-01-07 00:43:00.946249 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-07 00:43:00.946262 | orchestrator | Wednesday 07 January 2026 00:42:57 +0000 (0:00:00.530) 0:01:00.688 ***** 2026-01-07 00:43:00.946276 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:00.946288 | orchestrator | 2026-01-07 00:43:00.946301 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-07 00:43:00.946315 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.623) 0:01:01.311 ***** 2026-01-07 00:43:00.946328 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:00.946341 | orchestrator | 2026-01-07 00:43:00.946351 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-07 00:43:00.946363 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.112) 0:01:01.424 ***** 2026-01-07 00:43:00.946374 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946385 | orchestrator | 2026-01-07 00:43:00.946395 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-07 00:43:00.946415 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.098) 0:01:01.522 ***** 2026-01-07 00:43:00.946426 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946437 | orchestrator | 2026-01-07 00:43:00.946448 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-07 00:43:00.946476 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.095) 0:01:01.618 ***** 2026-01-07 00:43:00.946488 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:43:00.946500 | orchestrator |  "vgs_report": { 2026-01-07 00:43:00.946511 | orchestrator |  "vg": [] 2026-01-07 00:43:00.946555 | orchestrator |  } 2026-01-07 00:43:00.946567 | orchestrator | } 2026-01-07 00:43:00.946578 | orchestrator | 2026-01-07 00:43:00.946589 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-07 00:43:00.946601 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.125) 0:01:01.744 ***** 2026-01-07 00:43:00.946612 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946623 | orchestrator | 2026-01-07 00:43:00.946633 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-07 00:43:00.946644 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.105) 0:01:01.849 ***** 2026-01-07 00:43:00.946800 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946821 | orchestrator | 2026-01-07 00:43:00.946838 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-07 00:43:00.946856 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.115) 0:01:01.965 ***** 2026-01-07 00:43:00.946874 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946893 | orchestrator | 2026-01-07 00:43:00.946905 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-07 00:43:00.946916 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.127) 0:01:02.092 ***** 2026-01-07 00:43:00.946927 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946938 | orchestrator | 2026-01-07 00:43:00.946949 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-07 00:43:00.946959 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.126) 0:01:02.219 ***** 2026-01-07 00:43:00.946970 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.946981 | orchestrator | 2026-01-07 00:43:00.946992 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-07 00:43:00.947002 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.142) 0:01:02.361 ***** 2026-01-07 00:43:00.947013 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947024 | orchestrator | 2026-01-07 00:43:00.947035 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-07 00:43:00.947046 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.125) 0:01:02.486 ***** 2026-01-07 00:43:00.947057 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947067 | orchestrator | 2026-01-07 00:43:00.947078 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-07 00:43:00.947089 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.131) 0:01:02.618 ***** 2026-01-07 00:43:00.947100 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947111 | orchestrator | 2026-01-07 00:43:00.947122 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-07 00:43:00.947133 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.305) 0:01:02.924 ***** 2026-01-07 00:43:00.947143 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947154 | orchestrator | 2026-01-07 00:43:00.947221 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-07 00:43:00.947234 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.130) 0:01:03.055 ***** 2026-01-07 00:43:00.947245 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947256 | orchestrator | 2026-01-07 00:43:00.947267 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-07 00:43:00.947287 | orchestrator | Wednesday 07 January 2026 00:42:59 +0000 (0:00:00.122) 0:01:03.178 ***** 2026-01-07 00:43:00.947298 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947309 | orchestrator | 2026-01-07 00:43:00.947320 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-07 00:43:00.947331 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.118) 0:01:03.296 ***** 2026-01-07 00:43:00.947342 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947353 | orchestrator | 2026-01-07 00:43:00.947363 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-07 00:43:00.947374 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.137) 0:01:03.434 ***** 2026-01-07 00:43:00.947385 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947396 | orchestrator | 2026-01-07 00:43:00.947407 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-07 00:43:00.947417 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.133) 0:01:03.568 ***** 2026-01-07 00:43:00.947428 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947439 | orchestrator | 2026-01-07 00:43:00.947450 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-07 00:43:00.947460 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.162) 0:01:03.730 ***** 2026-01-07 00:43:00.947471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:00.947483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:00.947493 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947504 | orchestrator | 2026-01-07 00:43:00.947515 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-07 00:43:00.947526 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.143) 0:01:03.873 ***** 2026-01-07 00:43:00.947537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:00.947548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:00.947559 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:00.947569 | orchestrator | 2026-01-07 00:43:00.947580 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-07 00:43:00.947591 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.168) 0:01:04.041 ***** 2026-01-07 00:43:00.947612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.778483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.778602 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.778625 | orchestrator | 2026-01-07 00:43:03.778644 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-07 00:43:03.778663 | orchestrator | Wednesday 07 January 2026 00:43:00 +0000 (0:00:00.144) 0:01:04.186 ***** 2026-01-07 00:43:03.778680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.778696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.778712 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.778729 | orchestrator | 2026-01-07 00:43:03.778746 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-07 00:43:03.778792 | orchestrator | Wednesday 07 January 2026 00:43:01 +0000 (0:00:00.138) 0:01:04.325 ***** 2026-01-07 00:43:03.778809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.778827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.778844 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.778860 | orchestrator | 2026-01-07 00:43:03.778876 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-07 00:43:03.778892 | orchestrator | Wednesday 07 January 2026 00:43:01 +0000 (0:00:00.141) 0:01:04.466 ***** 2026-01-07 00:43:03.778908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.778940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.778957 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.778972 | orchestrator | 2026-01-07 00:43:03.778989 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-07 00:43:03.779006 | orchestrator | Wednesday 07 January 2026 00:43:01 +0000 (0:00:00.328) 0:01:04.795 ***** 2026-01-07 00:43:03.779022 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.779039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.779062 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.779078 | orchestrator | 2026-01-07 00:43:03.779095 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-07 00:43:03.779111 | orchestrator | Wednesday 07 January 2026 00:43:01 +0000 (0:00:00.136) 0:01:04.932 ***** 2026-01-07 00:43:03.779127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.779144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.779187 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.779206 | orchestrator | 2026-01-07 00:43:03.779223 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-07 00:43:03.779240 | orchestrator | Wednesday 07 January 2026 00:43:01 +0000 (0:00:00.135) 0:01:05.068 ***** 2026-01-07 00:43:03.779257 | 
orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:03.779275 | orchestrator | 2026-01-07 00:43:03.779292 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-07 00:43:03.779308 | orchestrator | Wednesday 07 January 2026 00:43:02 +0000 (0:00:00.530) 0:01:05.598 ***** 2026-01-07 00:43:03.779326 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:03.779342 | orchestrator | 2026-01-07 00:43:03.779359 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:43:03.779374 | orchestrator | Wednesday 07 January 2026 00:43:02 +0000 (0:00:00.499) 0:01:06.098 ***** 2026-01-07 00:43:03.779384 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:03.779393 | orchestrator | 2026-01-07 00:43:03.779403 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:43:03.779413 | orchestrator | Wednesday 07 January 2026 00:43:02 +0000 (0:00:00.144) 0:01:06.243 ***** 2026-01-07 00:43:03.779423 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'vg_name': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'}) 2026-01-07 00:43:03.779434 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'vg_name': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'}) 2026-01-07 00:43:03.779454 | orchestrator | 2026-01-07 00:43:03.779464 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:43:03.779474 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.163) 0:01:06.406 ***** 2026-01-07 00:43:03.779503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.779513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.779523 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.779533 | orchestrator | 2026-01-07 00:43:03.779542 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-07 00:43:03.779553 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.153) 0:01:06.560 ***** 2026-01-07 00:43:03.779563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.779573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.779583 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.779592 | orchestrator | 2026-01-07 00:43:03.779602 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:43:03.779612 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.148) 0:01:06.709 ***** 2026-01-07 00:43:03.779621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'})  2026-01-07 00:43:03.779631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'})  2026-01-07 00:43:03.779641 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:03.779651 | orchestrator | 2026-01-07 00:43:03.779660 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:43:03.779670 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.145) 0:01:06.855 ***** 2026-01-07 00:43:03.779680 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:43:03.779690 | orchestrator |  "lvm_report": { 2026-01-07 00:43:03.779699 | orchestrator |  "lv": [ 2026-01-07 00:43:03.779709 | orchestrator |  { 2026-01-07 00:43:03.779726 | orchestrator |  "lv_name": "osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955", 2026-01-07 00:43:03.779736 | orchestrator |  "vg_name": "ceph-96f57bfe-16b3-5bb1-823a-e63af6581955" 2026-01-07 00:43:03.779746 | orchestrator |  }, 2026-01-07 00:43:03.779756 | orchestrator |  { 2026-01-07 00:43:03.779766 | orchestrator |  "lv_name": "osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637", 2026-01-07 00:43:03.779775 | orchestrator |  "vg_name": "ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637" 2026-01-07 00:43:03.779785 | orchestrator |  } 2026-01-07 00:43:03.779793 | orchestrator |  ], 2026-01-07 00:43:03.779801 | orchestrator |  "pv": [ 2026-01-07 00:43:03.779809 | orchestrator |  { 2026-01-07 00:43:03.779816 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:43:03.779825 | orchestrator |  "vg_name": "ceph-96f57bfe-16b3-5bb1-823a-e63af6581955" 2026-01-07 00:43:03.779832 | orchestrator |  }, 2026-01-07 00:43:03.779840 | orchestrator |  { 2026-01-07 00:43:03.779848 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:43:03.779856 | orchestrator |  "vg_name": "ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637" 2026-01-07 00:43:03.779864 | orchestrator |  } 2026-01-07 00:43:03.779872 | orchestrator |  ] 2026-01-07 00:43:03.779885 | orchestrator |  } 2026-01-07 00:43:03.779893 | orchestrator | } 2026-01-07 00:43:03.779901 | orchestrator | 2026-01-07 00:43:03.779910 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:43:03.779918 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:43:03.779926 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:43:03.779934 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:43:03.779942 | orchestrator | 2026-01-07 00:43:03.779950 | orchestrator | 2026-01-07 00:43:03.779957 | orchestrator | 2026-01-07 00:43:03.779966 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:43:03.779973 | orchestrator | Wednesday 07 January 2026 00:43:03 +0000 (0:00:00.146) 0:01:07.001 ***** 2026-01-07 00:43:03.779981 | orchestrator | =============================================================================== 2026-01-07 00:43:03.779989 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s 2026-01-07 00:43:03.779997 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s 2026-01-07 00:43:03.780005 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.65s 2026-01-07 00:43:03.780013 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s 2026-01-07 00:43:03.780021 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2026-01-07 00:43:03.780029 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-01-07 00:43:03.780037 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2026-01-07 00:43:03.780045 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s 2026-01-07 00:43:03.780058 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s 2026-01-07 00:43:04.125078 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2026-01-07 00:43:04.125152 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-01-07 00:43:04.125159 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-01-07 00:43:04.125183 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-01-07 00:43:04.125192 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-01-07 00:43:04.125196 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.66s 2026-01-07 00:43:04.125201 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-01-07 00:43:04.125205 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.63s 2026-01-07 00:43:04.125209 | orchestrator | Get initial list of available block devices ----------------------------- 0.62s 2026-01-07 00:43:04.125213 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.61s 2026-01-07 00:43:04.125217 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.60s 2026-01-07 00:43:16.458133 | orchestrator | 2026-01-07 00:43:16 | INFO  | Task 9959755c-85b7-4f86-a28c-fa83f04c9c28 (facts) was prepared for execution. 2026-01-07 00:43:16.459276 | orchestrator | 2026-01-07 00:43:16 | INFO  | It takes a moment until task 9959755c-85b7-4f86-a28c-fa83f04c9c28 (facts) has been started and output is visible here. 
2026-01-07 00:43:28.257502 | orchestrator | 2026-01-07 00:43:28.257650 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-07 00:43:28.257667 | orchestrator | 2026-01-07 00:43:28.257679 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:43:28.257690 | orchestrator | Wednesday 07 January 2026 00:43:20 +0000 (0:00:00.232) 0:00:00.232 ***** 2026-01-07 00:43:28.257727 | orchestrator | ok: [testbed-manager] 2026-01-07 00:43:28.257741 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:43:28.257752 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:43:28.257763 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:43:28.257773 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:43:28.257784 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:28.257795 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:28.257806 | orchestrator | 2026-01-07 00:43:28.257817 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:43:28.257830 | orchestrator | Wednesday 07 January 2026 00:43:21 +0000 (0:00:00.976) 0:00:01.209 ***** 2026-01-07 00:43:28.257841 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:43:28.257853 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:43:28.257864 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:43:28.257875 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:43:28.257885 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:28.257896 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:28.257907 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:28.257918 | orchestrator | 2026-01-07 00:43:28.257929 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:43:28.257940 | orchestrator | 2026-01-07 00:43:28.257951 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-07 00:43:28.257962 | orchestrator | Wednesday 07 January 2026 00:43:22 +0000 (0:00:01.183) 0:00:02.392 ***** 2026-01-07 00:43:28.257973 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:43:28.257985 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:43:28.257997 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:43:28.258010 | orchestrator | ok: [testbed-manager] 2026-01-07 00:43:28.258085 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:43:28.258099 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:43:28.258112 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:28.258124 | orchestrator | 2026-01-07 00:43:28.258137 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:43:28.258149 | orchestrator | 2026-01-07 00:43:28.258226 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:43:28.258245 | orchestrator | Wednesday 07 January 2026 00:43:27 +0000 (0:00:04.940) 0:00:07.332 ***** 2026-01-07 00:43:28.258262 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:43:28.258274 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:43:28.258287 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:43:28.258298 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:43:28.258308 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:28.258319 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:28.258329 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:43:28.258340 | orchestrator | 2026-01-07 00:43:28.258350 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:43:28.258362 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258374 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-07 00:43:28.258385 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258396 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258406 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258419 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258452 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:43:28.258470 | orchestrator | 2026-01-07 00:43:28.258488 | orchestrator | 2026-01-07 00:43:28.258504 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:43:28.258521 | orchestrator | Wednesday 07 January 2026 00:43:28 +0000 (0:00:00.428) 0:00:07.761 ***** 2026-01-07 00:43:28.258540 | orchestrator | =============================================================================== 2026-01-07 00:43:28.258557 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s 2026-01-07 00:43:28.258577 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2026-01-07 00:43:28.258594 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2026-01-07 00:43:28.258612 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-01-07 00:43:40.240400 | orchestrator | 2026-01-07 00:43:40 | INFO  | Task 0a387705-625c-43ca-b574-ebc5a8b92f29 (frr) was prepared for execution. 2026-01-07 00:43:40.240537 | orchestrator | 2026-01-07 00:43:40 | INFO  | It takes a moment until task 0a387705-625c-43ca-b574-ebc5a8b92f29 (frr) has been started and output is visible here. 
2026-01-07 00:44:03.995257 | orchestrator | 2026-01-07 00:44:03.995401 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-07 00:44:03.995419 | orchestrator | 2026-01-07 00:44:03.995430 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-07 00:44:03.995467 | orchestrator | Wednesday 07 January 2026 00:43:44 +0000 (0:00:00.168) 0:00:00.168 ***** 2026-01-07 00:44:03.995479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:44:03.995492 | orchestrator | 2026-01-07 00:44:03.995503 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-07 00:44:03.995514 | orchestrator | Wednesday 07 January 2026 00:43:44 +0000 (0:00:00.209) 0:00:00.377 ***** 2026-01-07 00:44:03.995525 | orchestrator | changed: [testbed-manager] 2026-01-07 00:44:03.995537 | orchestrator | 2026-01-07 00:44:03.995548 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-07 00:44:03.995564 | orchestrator | Wednesday 07 January 2026 00:43:45 +0000 (0:00:01.027) 0:00:01.405 ***** 2026-01-07 00:44:03.995575 | orchestrator | changed: [testbed-manager] 2026-01-07 00:44:03.995586 | orchestrator | 2026-01-07 00:44:03.995597 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-07 00:44:03.995609 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:08.893) 0:00:10.298 ***** 2026-01-07 00:44:03.995620 | orchestrator | ok: [testbed-manager] 2026-01-07 00:44:03.995632 | orchestrator | 2026-01-07 00:44:03.995643 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-07 00:44:03.995653 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.937) 0:00:11.236 ***** 2026-01-07 
00:44:03.995664 | orchestrator | changed: [testbed-manager] 2026-01-07 00:44:03.995675 | orchestrator | 2026-01-07 00:44:03.995686 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-07 00:44:03.995696 | orchestrator | Wednesday 07 January 2026 00:43:56 +0000 (0:00:00.856) 0:00:12.092 ***** 2026-01-07 00:44:03.995706 | orchestrator | ok: [testbed-manager] 2026-01-07 00:44:03.995717 | orchestrator | 2026-01-07 00:44:03.995727 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-07 00:44:03.995738 | orchestrator | Wednesday 07 January 2026 00:43:57 +0000 (0:00:01.082) 0:00:13.174 ***** 2026-01-07 00:44:03.995748 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:44:03.995760 | orchestrator | 2026-01-07 00:44:03.995771 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-07 00:44:03.995782 | orchestrator | Wednesday 07 January 2026 00:43:57 +0000 (0:00:00.126) 0:00:13.300 ***** 2026-01-07 00:44:03.995820 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:44:03.995833 | orchestrator | 2026-01-07 00:44:03.995844 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-07 00:44:03.995855 | orchestrator | Wednesday 07 January 2026 00:43:57 +0000 (0:00:00.129) 0:00:13.430 ***** 2026-01-07 00:44:03.995865 | orchestrator | changed: [testbed-manager] 2026-01-07 00:44:03.995876 | orchestrator | 2026-01-07 00:44:03.995886 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-07 00:44:03.995897 | orchestrator | Wednesday 07 January 2026 00:43:58 +0000 (0:00:00.886) 0:00:14.316 ***** 2026-01-07 00:44:03.995908 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-07 00:44:03.995918 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-07 00:44:03.995930 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-07 00:44:03.995941 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-07 00:44:03.995951 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-07 00:44:03.995962 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-07 00:44:03.995973 | orchestrator | 2026-01-07 00:44:03.995984 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-07 00:44:03.995995 | orchestrator | Wednesday 07 January 2026 00:44:00 +0000 (0:00:01.946) 0:00:16.263 ***** 2026-01-07 00:44:03.996006 | orchestrator | ok: [testbed-manager] 2026-01-07 00:44:03.996017 | orchestrator | 2026-01-07 00:44:03.996028 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-07 00:44:03.996040 | orchestrator | Wednesday 07 January 2026 00:44:02 +0000 (0:00:02.288) 0:00:18.552 ***** 2026-01-07 00:44:03.996050 | orchestrator | changed: [testbed-manager] 2026-01-07 00:44:03.996061 | orchestrator | 2026-01-07 00:44:03.996071 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:44:03.996083 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:44:03.996094 | orchestrator | 2026-01-07 00:44:03.996104 | orchestrator | 2026-01-07 00:44:03.996115 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:44:03.996125 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:01.219) 0:00:19.771 ***** 2026-01-07 00:44:03.996155 | 
orchestrator | =============================================================================== 2026-01-07 00:44:03.996166 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.89s 2026-01-07 00:44:03.996175 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.29s 2026-01-07 00:44:03.996185 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.95s 2026-01-07 00:44:03.996195 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.22s 2026-01-07 00:44:03.996206 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.08s 2026-01-07 00:44:03.996238 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.03s 2026-01-07 00:44:03.996250 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.94s 2026-01-07 00:44:03.996261 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s 2026-01-07 00:44:03.996272 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.86s 2026-01-07 00:44:03.996284 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-01-07 00:44:03.996294 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-01-07 00:44:03.996304 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-01-07 00:44:04.254381 | orchestrator | 2026-01-07 00:44:04.256985 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Jan 7 00:44:04 UTC 2026 2026-01-07 00:44:04.257050 | orchestrator | 2026-01-07 00:44:06.129522 | orchestrator | 2026-01-07 00:44:06 | INFO  | Collection nutshell is prepared for execution 2026-01-07 00:44:06.129646 | orchestrator | 2026-01-07 00:44:06 | INFO  | A [0] - 
dotfiles 2026-01-07 00:44:16.193777 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - homer 2026-01-07 00:44:16.193923 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - netdata 2026-01-07 00:44:16.193940 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - openstackclient 2026-01-07 00:44:16.194259 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - phpmyadmin 2026-01-07 00:44:16.194989 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - common 2026-01-07 00:44:16.199214 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- loadbalancer 2026-01-07 00:44:16.199572 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [2] --- opensearch 2026-01-07 00:44:16.199902 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [2] --- mariadb-ng 2026-01-07 00:44:16.200377 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [3] ---- horizon 2026-01-07 00:44:16.200712 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [3] ---- keystone 2026-01-07 00:44:16.201165 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- neutron 2026-01-07 00:44:16.201595 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ wait-for-nova 2026-01-07 00:44:16.201990 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [6] ------- octavia 2026-01-07 00:44:16.204001 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- barbican 2026-01-07 00:44:16.204117 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- designate 2026-01-07 00:44:16.204338 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- ironic 2026-01-07 00:44:16.204464 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- placement 2026-01-07 00:44:16.204791 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- magnum 2026-01-07 00:44:16.205839 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- openvswitch 2026-01-07 00:44:16.205868 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [2] --- ovn 2026-01-07 00:44:16.206169 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- memcached 2026-01-07 
00:44:16.206472 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- redis 2026-01-07 00:44:16.206493 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- rabbitmq-ng 2026-01-07 00:44:16.206843 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - kubernetes 2026-01-07 00:44:16.209341 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- kubeconfig 2026-01-07 00:44:16.209375 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- copy-kubeconfig 2026-01-07 00:44:16.209613 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [0] - ceph 2026-01-07 00:44:16.211701 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [1] -- ceph-pools 2026-01-07 00:44:16.212168 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [2] --- copy-ceph-keys 2026-01-07 00:44:16.212321 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [3] ---- cephclient 2026-01-07 00:44:16.212468 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-07 00:44:16.212492 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- wait-for-keystone 2026-01-07 00:44:16.212511 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-07 00:44:16.212529 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ glance 2026-01-07 00:44:16.212585 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ cinder 2026-01-07 00:44:16.212729 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ nova 2026-01-07 00:44:16.212941 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [4] ----- prometheus 2026-01-07 00:44:16.212973 | orchestrator | 2026-01-07 00:44:16 | INFO  | A [5] ------ grafana 2026-01-07 00:44:16.411640 | orchestrator | 2026-01-07 00:44:16 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-07 00:44:16.412574 | orchestrator | 2026-01-07 00:44:16 | INFO  | Tasks are running in the background 2026-01-07 00:44:19.471717 | orchestrator | 2026-01-07 00:44:19 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-07 00:44:21.594553 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED 2026-01-07 00:44:21.594738 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:44:21.595352 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED 2026-01-07 00:44:21.596119 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED 2026-01-07 00:44:21.596543 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:44:21.597041 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:44:21.597613 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task 16a6fa1e-b038-4d65-85cf-5c585ab5d2e2 is in state STARTED 2026-01-07 00:44:21.598297 | orchestrator | 2026-01-07 00:44:21 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED 2026-01-07 00:44:21.598507 | orchestrator | 2026-01-07 00:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:44:24.634500 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED 2026-01-07 00:44:24.634764 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:44:24.635361 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED 2026-01-07 00:44:24.635960 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED 2026-01-07 00:44:24.636452 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:44:24.636999 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task 
2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:24.637807 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task 16a6fa1e-b038-4d65-85cf-5c585ab5d2e2 is in state STARTED
2026-01-07 00:44:24.638288 | orchestrator | 2026-01-07 00:44:24 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:24.638324 | orchestrator | 2026-01-07 00:44:24 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:27.670106 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED
2026-01-07 00:44:27.670270 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:27.670579 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:27.671073 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:27.672139 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:27.676351 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:27.676742 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task 16a6fa1e-b038-4d65-85cf-5c585ab5d2e2 is in state STARTED
2026-01-07 00:44:27.677296 | orchestrator | 2026-01-07 00:44:27 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:27.677357 | orchestrator | 2026-01-07 00:44:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:30.903704 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED
2026-01-07 00:44:30.903802 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:30.903817 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:30.903827 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:30.903837 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:30.903847 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:30.903857 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task 16a6fa1e-b038-4d65-85cf-5c585ab5d2e2 is in state STARTED
2026-01-07 00:44:30.903867 | orchestrator | 2026-01-07 00:44:30 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:30.903877 | orchestrator | 2026-01-07 00:44:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:34.263052 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED
2026-01-07 00:44:34.263993 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:34.264016 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:34.264422 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:34.266436 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:34.266615 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:34.266840 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task 16a6fa1e-b038-4d65-85cf-5c585ab5d2e2 is in state SUCCESS
2026-01-07 00:44:34.267393 | orchestrator | 2026-01-07 00:44:34 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:34.267416 | orchestrator | 2026-01-07 00:44:34 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:37.361044 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED
2026-01-07 00:44:37.362382 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:37.362746 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:37.363358 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:37.364970 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:37.365019 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:37.365801 | orchestrator | 2026-01-07 00:44:37 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:37.365824 | orchestrator | 2026-01-07 00:44:37 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:40.515504 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state STARTED
2026-01-07 00:44:40.516607 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:40.517834 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:40.519742 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:40.519793 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:40.521211 | orchestrator | 2026-01-07 00:44:40 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:40.521624 | orchestrator | 2026-01-07
00:44:40 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:40.521666 | orchestrator | 2026-01-07 00:44:40 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:43.594095 | orchestrator |
2026-01-07 00:44:43.594237 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:44:43.594251 | orchestrator |
2026-01-07 00:44:43.594257 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:44:43.594264 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.240) 0:00:00.240 *****
2026-01-07 00:44:43.594271 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:44:43.594279 | orchestrator |
2026-01-07 00:44:43.594286 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:44:43.594292 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.092) 0:00:00.333 *****
2026-01-07 00:44:43.594300 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-07 00:44:43.594307 | orchestrator |
2026-01-07 00:44:43.594311 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-07 00:44:43.594315 | orchestrator |
2026-01-07 00:44:43.594319 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-07 00:44:43.594323 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.122) 0:00:00.455 *****
2026-01-07 00:44:43.594327 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0
2026-01-07 00:44:43.594331 | orchestrator |
2026-01-07 00:44:43.594335 | orchestrator | TASK [service-images-pull : opensearch | Pull images] **************************
2026-01-07 00:44:43.594340 | orchestrator | Wednesday 07 January 2026 00:42:19 +0000 (0:00:00.158) 0:00:00.614 *****
2026-01-07 00:44:43.594344 | orchestrator | changed: [testbed-node-0] => (item=opensearch)
2026-01-07 00:44:43.594348 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards)
2026-01-07 00:44:43.594352 | orchestrator |
2026-01-07 00:44:43.594356 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:44:43.594360 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594366 | orchestrator |
2026-01-07 00:44:43.594370 | orchestrator |
2026-01-07 00:44:43.594374 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:44:43.594378 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:02:11.103) 0:02:11.717 *****
2026-01-07 00:44:43.594381 | orchestrator | ===============================================================================
2026-01-07 00:44:43.594385 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 131.10s
2026-01-07 00:44:43.594404 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.16s
2026-01-07 00:44:43.594408 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.12s
2026-01-07 00:44:43.594412 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.09s
2026-01-07 00:44:43.594416 | orchestrator |
2026-01-07 00:44:43.594420 | orchestrator |
2026-01-07 00:44:43.594423 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-07 00:44:43.594427 | orchestrator |
2026-01-07 00:44:43.594431 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-07 00:44:43.594435 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.736) 0:00:00.736 *****
2026-01-07 00:44:43.594439 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:44:43.594443 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:44:43.594447 | orchestrator | changed: [testbed-manager]
2026-01-07 00:44:43.594450 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:44:43.594454 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:44:43.594458 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:44:43.594461 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:44:43.594465 | orchestrator |
2026-01-07 00:44:43.594469 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-07 00:44:43.594473 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:04.549) 0:00:05.285 *****
2026-01-07 00:44:43.594477 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-07 00:44:43.594481 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-07 00:44:43.594485 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-07 00:44:43.594488 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-07 00:44:43.594492 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-07 00:44:43.594496 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-07 00:44:43.594500 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-07 00:44:43.594503 | orchestrator |
2026-01-07 00:44:43.594507 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-01-07 00:44:43.594511 | orchestrator | Wednesday 07 January 2026 00:44:34 +0000 (0:00:01.964) 0:00:07.249 *****
2026-01-07 00:44:43.594518 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:33.961170', 'end': '2026-01-07 00:44:33.966176', 'delta': '0:00:00.005006', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594543 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:34.001434', 'end': '2026-01-07 00:44:34.009736', 'delta': '0:00:00.008302', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594762 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:33.995562', 'end': '2026-01-07 00:44:33.999167', 'delta': '0:00:00.003605', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594789 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:33.956335', 'end': '2026-01-07 00:44:33.962840', 'delta': '0:00:00.006505', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594796 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:34.004684', 'end': '2026-01-07 00:44:34.012679', 'delta': '0:00:00.007995', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594803 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:34.237110', 'end': '2026-01-07 00:44:34.246008', 'delta': '0:00:00.008898', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594820 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:44:34.609858', 'end': '2026-01-07 00:44:34.619026', 'delta': '0:00:00.009168', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-07 00:44:43.594833 | orchestrator |
2026-01-07 00:44:43.594839 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-07 00:44:43.594846 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:02.938) 0:00:10.187 *****
2026-01-07 00:44:43.594851 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-07 00:44:43.594856 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-07 00:44:43.594861 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-07 00:44:43.594865 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-07 00:44:43.594870 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-07 00:44:43.594874 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-07 00:44:43.594878 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-07 00:44:43.594882 | orchestrator |
2026-01-07 00:44:43.594887 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2026-01-07 00:44:43.594891 | orchestrator | Wednesday 07 January 2026 00:44:39 +0000 (0:00:01.349) 0:00:11.536 *****
2026-01-07 00:44:43.594896 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-07 00:44:43.594900 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-07 00:44:43.594905 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-07 00:44:43.594909 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-07 00:44:43.594914 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-07 00:44:43.594919 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-07 00:44:43.594925 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-07 00:44:43.594931 | orchestrator |
2026-01-07 00:44:43.594941 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:44:43.594949 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594956 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594968 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594975 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594980 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594986 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594992 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:44:43.594998 | orchestrator |
2026-01-07 00:44:43.595004 | orchestrator |
2026-01-07 00:44:43.595010 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:44:43.595016 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:02.703) 0:00:14.240 *****
2026-01-07 00:44:43.595023 | orchestrator | ===============================================================================
2026-01-07 00:44:43.595029 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.55s
2026-01-07 00:44:43.595036 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.94s
2026-01-07 00:44:43.595042 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.70s
2026-01-07 00:44:43.595049 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.96s
2026-01-07 00:44:43.595055 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.35s
2026-01-07 00:44:43.595066 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task fd292b6f-3b70-477e-b0ac-b1250f4678ad is in state SUCCESS
2026-01-07 00:44:43.595074 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:43.595077 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:43.595086 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:43.595090 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:43.595093 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:43.595097 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:43.595101 | orchestrator | 2026-01-07 00:44:43 | INFO  | Task
0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:43.595105 | orchestrator | 2026-01-07 00:44:43 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:46.641881 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:46.641974 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:46.641987 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:46.641997 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:46.642006 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:46.642068 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:46.642086 | orchestrator | 2026-01-07 00:44:46 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:46.642102 | orchestrator | 2026-01-07 00:44:46 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:49.654399 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:49.656006 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:49.656045 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:49.657300 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:49.659541 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:49.661361 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:49.663395 | orchestrator | 2026-01-07 00:44:49 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:49.663431 | orchestrator | 2026-01-07 00:44:49 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:53.050272 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:53.050374 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:53.052514 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:53.057099 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:53.059341 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:53.061680 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:53.063840 | orchestrator | 2026-01-07 00:44:53 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:53.063880 | orchestrator | 2026-01-07 00:44:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:56.145644 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:56.145751 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:56.145766 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:56.146276 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:56.163273 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:56.170337 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:56.174993 | orchestrator | 2026-01-07 00:44:56 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:56.175148 | orchestrator | 2026-01-07 00:44:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:44:59.216225 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:44:59.216336 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:44:59.217436 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:44:59.218803 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:44:59.221824 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:44:59.221894 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:44:59.222462 | orchestrator | 2026-01-07 00:44:59 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:44:59.222514 | orchestrator | 2026-01-07 00:44:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:02.258186 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:02.259046 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:02.259552 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:45:02.260273 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:02.261280 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:02.261866 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:02.262539 | orchestrator | 2026-01-07 00:45:02 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:45:02.262595 | orchestrator | 2026-01-07 00:45:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:05.322476 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:05.322603 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:05.323500 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:45:05.324326 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:05.325814 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:05.326166 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:05.326890 | orchestrator | 2026-01-07 00:45:05 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state STARTED
2026-01-07 00:45:05.326924 | orchestrator | 2026-01-07 00:45:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:08.419422 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:08.419515 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:08.421341 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:45:08.421828 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:08.425459 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:08.425860 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:08.426695 | orchestrator | 2026-01-07 00:45:08 | INFO  | Task 0439521d-fa64-4cab-aee8-c674f94007d3 is in state SUCCESS
2026-01-07 00:45:08.427794 | orchestrator | 2026-01-07 00:45:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:11.463448 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:11.464310 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:11.467342 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state STARTED
2026-01-07 00:45:11.467380 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:11.469186 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:11.470077 | orchestrator | 2026-01-07 00:45:11 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:11.470787 | orchestrator | 2026-01-07 00:45:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:14.507443 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:14.507524 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:14.507531 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task be974376-47f3-45bb-a067-7050f7a6b3cc is in state SUCCESS
2026-01-07 00:45:14.508494 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:14.511521 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:14.512688 | orchestrator | 2026-01-07 00:45:14 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:14.514076 | orchestrator | 2026-01-07 00:45:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:17.543194 | orchestrator | 2026-01-07 00:45:17 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:17.544132 | orchestrator | 2026-01-07 00:45:17 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:17.544781 | orchestrator | 2026-01-07 00:45:17 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:17.545668 | orchestrator | 2026-01-07 00:45:17 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:17.546316 | orchestrator | 2026-01-07 00:45:17 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:17.546346 | orchestrator | 2026-01-07 00:45:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:20.641574 | orchestrator | 2026-01-07 00:45:20 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:20.641661 | orchestrator | 2026-01-07 00:45:20 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:20.641667 | orchestrator | 2026-01-07 00:45:20 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:20.642245 | orchestrator | 2026-01-07 00:45:20 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:20.644255 | orchestrator | 2026-01-07 00:45:20 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:20.644307 | orchestrator | 2026-01-07 00:45:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:23.724484 | orchestrator | 2026-01-07 00:45:23 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:23.726457 | orchestrator | 2026-01-07 00:45:23 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:23.728401 | orchestrator | 2026-01-07 00:45:23 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:23.730612 | orchestrator | 2026-01-07 00:45:23 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:23.734182 | orchestrator | 2026-01-07 00:45:23 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:23.734227 | orchestrator | 2026-01-07 00:45:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:26.778758 | orchestrator | 2026-01-07 00:45:26 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:26.782338 | orchestrator | 2026-01-07 00:45:26 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:26.785484 | orchestrator | 2026-01-07 00:45:26 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:26.786438 | orchestrator | 2026-01-07 00:45:26 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:26.791810 | orchestrator | 2026-01-07 00:45:26 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:26.791862 | orchestrator | 2026-01-07 00:45:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:29.841581 | orchestrator | 2026-01-07 00:45:29 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:29.842521 | orchestrator | 2026-01-07 00:45:29 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:29.844217 | orchestrator | 2026-01-07 00:45:29 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:29.846102 | orchestrator | 2026-01-07 00:45:29 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:29.848562 | orchestrator | 2026-01-07 00:45:29 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:29.848708 | orchestrator | 2026-01-07 00:45:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:32.880844 | orchestrator | 2026-01-07 00:45:32 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:32.880976 | orchestrator | 2026-01-07 00:45:32 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:32.882359 | orchestrator | 2026-01-07 00:45:32 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:32.882663 | orchestrator | 2026-01-07 00:45:32 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:32.883040 | orchestrator | 2026-01-07 00:45:32 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:32.883153 | orchestrator | 2026-01-07 00:45:32 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:35.922166 | orchestrator | 2026-01-07 00:45:35 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:35.923232 | orchestrator | 2026-01-07 00:45:35 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:35.923900 | orchestrator | 2026-01-07 00:45:35 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:35.924892 | orchestrator | 2026-01-07 00:45:35 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:35.926333 | orchestrator | 2026-01-07 00:45:35 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:35.926354 | orchestrator | 2026-01-07 00:45:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:38.964540 | orchestrator | 2026-01-07 00:45:38 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:38.964931 | orchestrator | 2026-01-07 00:45:38 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state STARTED
2026-01-07 00:45:38.965604 | orchestrator | 2026-01-07 00:45:38 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:38.966708 | orchestrator | 2026-01-07 00:45:38 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:38.967259 | orchestrator | 2026-01-07 00:45:38 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:38.967275 | orchestrator | 2026-01-07 00:45:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:42.006585 | orchestrator |
2026-01-07 00:45:42.006650 | orchestrator |
2026-01-07 00:45:42.006657 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-07 00:45:42.006661 | orchestrator |
2026-01-07 00:45:42.006666 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-07 00:45:42.006671 | orchestrator | Wednesday 07 January 2026 00:44:29 +0000 (0:00:00.316) 0:00:00.316 *****
2026-01-07 00:45:42.006675 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:45:42.006681 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-07 00:45:42.006686 | orchestrator | }
2026-01-07 00:45:42.006690 | orchestrator |
2026-01-07 00:45:42.006694 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-07 00:45:42.006758 | orchestrator | Wednesday 07 January 2026 00:44:29 +0000 (0:00:00.438) 0:00:00.754 *****
2026-01-07 00:45:42.006763 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.006768 | orchestrator |
2026-01-07 00:45:42.006790 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-07 00:45:42.006794 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:01.171) 0:00:01.926 *****
2026-01-07 00:45:42.006798 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-07 00:45:42.006803 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-07 00:45:42.006806 | orchestrator |
2026-01-07 00:45:42.006810 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-07 00:45:42.006814 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:01.823) 0:00:03.749 *****
2026-01-07 00:45:42.006818 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.006822 | orchestrator |
2026-01-07 00:45:42.006826 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-07 00:45:42.006830 | orchestrator | Wednesday 07 January 2026 00:44:35 +0000 (0:00:03.072) 0:00:06.822 *****
2026-01-07 00:45:42.006834 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.006838 | orchestrator |
2026-01-07 00:45:42.006841 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-07 00:45:42.006845 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:01.962) 0:00:08.784 *****
2026-01-07 00:45:42.006849 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-07 00:45:42.006853 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.006857 | orchestrator |
2026-01-07 00:45:42.006861 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-07 00:45:42.006865 | orchestrator | Wednesday 07 January 2026 00:45:02 +0000 (0:00:25.344) 0:00:34.129 *****
2026-01-07 00:45:42.006869 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.006872 | orchestrator |
2026-01-07 00:45:42.006876 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:45:42.006880 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.006885 | orchestrator |
2026-01-07 00:45:42.006889 | orchestrator |
2026-01-07 00:45:42.006893 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:45:42.006897 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:01.916) 0:00:36.045 *****
2026-01-07 00:45:42.006901 | orchestrator | ===============================================================================
2026-01-07 00:45:42.006904 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.34s
2026-01-07 00:45:42.006908 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.07s
2026-01-07 00:45:42.006912 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.96s
2026-01-07 00:45:42.006916 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.92s
2026-01-07 00:45:42.006920 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.82s
2026-01-07 00:45:42.006923 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.17s
2026-01-07 00:45:42.006927 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.44s
2026-01-07 00:45:42.006931 | orchestrator |
2026-01-07 00:45:42.006935 | orchestrator |
2026-01-07 00:45:42.006939 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-07 00:45:42.006942 | orchestrator |
2026-01-07 00:45:42.006946 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-07 00:45:42.006950 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.525) 0:00:00.525 *****
2026-01-07 00:45:42.006954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-07 00:45:42.006962 | orchestrator |
2026-01-07 00:45:42.006967 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-07 00:45:42.006971 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.609) 0:00:01.135 *****
2026-01-07 00:45:42.006975 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-07 00:45:42.006979 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-07 00:45:42.006983 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-07 00:45:42.006987 | orchestrator |
2026-01-07 00:45:42.006991 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-07 00:45:42.006994 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:03.694) 0:00:04.829 *****
2026-01-07 00:45:42.006998 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007002 | orchestrator |
2026-01-07 00:45:42.007006 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-07 00:45:42.007010 | orchestrator | Wednesday 07 January 2026 00:44:35 +0000 (0:00:02.822) 0:00:07.652 *****
2026-01-07 00:45:42.007022 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-07 00:45:42.007027 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007030 | orchestrator |
2026-01-07 00:45:42.007034 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-07 00:45:42.007038 | orchestrator | Wednesday 07 January 2026 00:45:07 +0000 (0:00:31.668) 0:00:39.320 *****
2026-01-07 00:45:42.007042 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007046 | orchestrator |
2026-01-07 00:45:42.007049 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-07 00:45:42.007053 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:00.895) 0:00:40.216 *****
2026-01-07 00:45:42.007057 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007061 | orchestrator |
2026-01-07 00:45:42.007093 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-07 00:45:42.007099 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:00.556) 0:00:40.772 *****
2026-01-07 00:45:42.007105 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007111 | orchestrator |
2026-01-07 00:45:42.007117 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-07 00:45:42.007123 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:02.004) 0:00:42.777 *****
2026-01-07 00:45:42.007129 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007136 | orchestrator |
2026-01-07 00:45:42.007143 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-07 00:45:42.007149 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:00.444) 0:00:43.451 *****
2026-01-07 00:45:42.007155 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007160 | orchestrator |
2026-01-07 00:45:42.007164 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-07 00:45:42.007168 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:00.444) 0:00:43.896 *****
2026-01-07 00:45:42.007171 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007175 | orchestrator |
2026-01-07 00:45:42.007179 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:45:42.007183 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007187 | orchestrator |
2026-01-07 00:45:42.007190 | orchestrator |
2026-01-07 00:45:42.007194 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:45:42.007198 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.367) 0:00:44.264 *****
2026-01-07 00:45:42.007201 | orchestrator | ===============================================================================
2026-01-07 00:45:42.007205 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.67s
2026-01-07 00:45:42.007213 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.69s
2026-01-07 00:45:42.007217 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.82s
2026-01-07 00:45:42.007221 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.00s
2026-01-07 00:45:42.007224 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.90s
2026-01-07 00:45:42.007228 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.67s
2026-01-07 00:45:42.007232 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.61s
2026-01-07 00:45:42.007236 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.56s
2026-01-07 00:45:42.007239 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.44s
2026-01-07 00:45:42.007243 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.37s
2026-01-07 00:45:42.007247 | orchestrator |
2026-01-07 00:45:42.007251 | orchestrator |
2026-01-07 00:45:42.007254 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:45:42.007258 | orchestrator |
2026-01-07 00:45:42.007262 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:45:42.007308 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:00.190) 0:00:00.190 *****
2026-01-07 00:45:42.007313 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-07 00:45:42.007318 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-07 00:45:42.007324 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-07 00:45:42.007330 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-07 00:45:42.007340 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-07 00:45:42.007347 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-07 00:45:42.007353 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-07 00:45:42.007359 | orchestrator |
2026-01-07 00:45:42.007369 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-07 00:45:42.007375 | orchestrator |
2026-01-07 00:45:42.007380 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-07 00:45:42.007386 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:01.018) 0:00:01.209 *****
2026-01-07 00:45:42.007435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:45:42.007446 | orchestrator |
2026-01-07 00:45:42.007453 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-07 00:45:42.007460 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:01.074) 0:00:02.284 *****
2026-01-07 00:45:42.007464 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007468 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:45:42.007472 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:45:42.007476 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:45:42.007480 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:42.007488 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:42.007492 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:42.007496 | orchestrator |
2026-01-07 00:45:42.007500 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-07 00:45:42.007504 | orchestrator | Wednesday 07 January 2026 00:44:34 +0000 (0:00:02.229) 0:00:04.513 *****
2026-01-07 00:45:42.007508 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:45:42.007511 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:45:42.007515 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:45:42.007519 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007522 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:42.007526 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:42.007530 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:42.007538 | orchestrator |
2026-01-07 00:45:42.007542 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-07 00:45:42.007546 | orchestrator | Wednesday 07 January 2026 00:44:38 +0000 (0:00:04.139) 0:00:08.653 *****
2026-01-07 00:45:42.007549 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:45:42.007553 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:45:42.007557 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:45:42.007561 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:45:42.007564 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:45:42.007568 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:45:42.007572 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007575 | orchestrator |
2026-01-07 00:45:42.007579 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-07 00:45:42.007583 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:01.803) 0:00:10.457 *****
2026-01-07 00:45:42.007587 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:45:42.007590 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:45:42.007594 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:45:42.007598 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:45:42.007602 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:45:42.007605 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:45:42.007609 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007613 | orchestrator |
2026-01-07 00:45:42.007616 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-07 00:45:42.007620 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:10.561) 0:00:21.018 *****
2026-01-07 00:45:42.007624 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:45:42.007628 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:45:42.007631 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:45:42.007635 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007639 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:45:42.007642 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:45:42.007646 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:45:42.007650 | orchestrator |
2026-01-07 00:45:42.007654 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-07 00:45:42.007658 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:33.147) 0:00:54.166 *****
2026-01-07 00:45:42.007662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:45:42.007667 | orchestrator |
2026-01-07 00:45:42.007671 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-07 00:45:42.007674 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:01.305) 0:00:55.471 *****
2026-01-07 00:45:42.007678 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-07 00:45:42.007682 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-07 00:45:42.007686 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-07 00:45:42.007690 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-07 00:45:42.007693 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-07 00:45:42.007697 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-07 00:45:42.007701 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-07 00:45:42.007704 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-07 00:45:42.007708 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-07 00:45:42.007712 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-07 00:45:42.007716 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-07 00:45:42.007719 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-07 00:45:42.007723 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-07 00:45:42.007730 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-07 00:45:42.007733 | orchestrator |
2026-01-07 00:45:42.007737 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-07 00:45:42.007741 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:04.886) 0:01:00.358 *****
2026-01-07 00:45:42.007745 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007752 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:45:42.007756 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:45:42.007759 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:45:42.007763 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:42.007767 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:42.007771 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:42.007774 | orchestrator |
2026-01-07 00:45:42.007778 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-07 00:45:42.007782 | orchestrator | Wednesday 07 January 2026 00:45:31 +0000 (0:00:01.003) 0:01:01.362 *****
2026-01-07 00:45:42.007786 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007790 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:45:42.007794 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:45:42.007797 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:45:42.007801 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:45:42.007805 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:45:42.007808 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:45:42.007812 | orchestrator |
2026-01-07 00:45:42.007816 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-07 00:45:42.007823 | orchestrator | Wednesday 07 January 2026 00:45:32 +0000 (0:00:01.179) 0:01:02.542 *****
2026-01-07 00:45:42.007827 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007831 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:45:42.007834 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:45:42.007838 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:45:42.007842 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:42.007846 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:42.007849 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:42.007853 | orchestrator |
2026-01-07 00:45:42.007857 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-07 00:45:42.007861 | orchestrator | Wednesday 07 January 2026 00:45:33 +0000 (0:00:00.944) 0:01:03.486 *****
2026-01-07 00:45:42.007864 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:45:42.007868 | orchestrator | ok: [testbed-manager]
2026-01-07 00:45:42.007872 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:45:42.007876 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:45:42.007879 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:42.007883 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:42.007887 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:42.007890 | orchestrator |
2026-01-07 00:45:42.007894 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-07 00:45:42.007898 | orchestrator | Wednesday 07 January 2026 00:45:35 +0000 (0:00:01.600) 0:01:05.087 *****
2026-01-07 00:45:42.007902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-07 00:45:42.007907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:45:42.007911 | orchestrator |
2026-01-07 00:45:42.007915 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-07 00:45:42.007919 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:01.285) 0:01:06.373 *****
2026-01-07 00:45:42.007923 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007927 | orchestrator |
2026-01-07 00:45:42.007930 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-07 00:45:42.007934 | orchestrator | Wednesday 07 January 2026 00:45:38 +0000 (0:00:01.809) 0:01:08.182 *****
2026-01-07 00:45:42.007938 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:45:42.007948 | orchestrator | changed: [testbed-manager]
2026-01-07 00:45:42.007952 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:45:42.007956 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:45:42.007959 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:45:42.007963 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:45:42.007967 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:45:42.007971 | orchestrator |
2026-01-07 00:45:42.007975 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:45:42.007978 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007982 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007986 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007990 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007994 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.007998 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.008002 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:45:42.008005 | orchestrator |
2026-01-07 00:45:42.008009 | orchestrator |
2026-01-07 00:45:42.008013 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:45:42.008017 | orchestrator | Wednesday 07 January 2026 00:45:40 +0000 (0:00:02.598) 0:01:10.781 *****
2026-01-07 00:45:42.008021 | orchestrator | ===============================================================================
2026-01-07 00:45:42.008024 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 33.15s
2026-01-07 00:45:42.008028 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.56s
2026-01-07 00:45:42.008034 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.89s
2026-01-07 00:45:42.008038 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.14s
2026-01-07 00:45:42.008042 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.60s
2026-01-07 00:45:42.008045 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.23s
2026-01-07 00:45:42.008049 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.81s
2026-01-07 00:45:42.008053 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.80s
2026-01-07 00:45:42.008057 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.60s
2026-01-07 00:45:42.008061 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.30s
2026-01-07 00:45:42.008103 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.29s
2026-01-07 00:45:42.008117 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.18s
2026-01-07 00:45:42.008124 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.07s
2026-01-07 00:45:42.008130 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s
2026-01-07 00:45:42.008138 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.00s
2026-01-07 00:45:42.008145 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 0.94s
2026-01-07 00:45:42.008151 | orchestrator | 2026-01-07 00:45:42 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:42.008162 | orchestrator | 2026-01-07 00:45:42 | INFO  | Task c4679ae6-43a9-4265-ae10-a1e786ed7a83 is in state SUCCESS
2026-01-07 00:45:42.008167 | orchestrator | 2026-01-07 00:45:42 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:42.008172 | orchestrator | 2026-01-07 00:45:42 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:42.008176 | orchestrator | 2026-01-07 00:45:42 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:42.008181 | orchestrator | 2026-01-07 00:45:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:45.052784 | orchestrator | 2026-01-07 00:45:45 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:45.054122 | orchestrator | 2026-01-07 00:45:45 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:45.056030 | orchestrator | 2026-01-07 00:45:45 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:45.057423 | orchestrator | 2026-01-07 00:45:45 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:45.057447 | orchestrator | 2026-01-07 00:45:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:48.120870 | orchestrator | 2026-01-07 00:45:48 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:48.123224 | orchestrator | 2026-01-07 00:45:48 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:48.138697 | orchestrator | 2026-01-07 00:45:48 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:48.143574 | orchestrator | 2026-01-07 00:45:48 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:48.144180 | orchestrator | 2026-01-07 00:45:48 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:51.196135 | orchestrator | 2026-01-07 00:45:51 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:51.197550 | orchestrator | 2026-01-07 00:45:51 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:51.199308 | orchestrator | 2026-01-07 00:45:51 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state STARTED
2026-01-07 00:45:51.201180 | orchestrator | 2026-01-07 00:45:51 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:51.201206 | orchestrator | 2026-01-07 00:45:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:54.238295 | orchestrator | 2026-01-07 00:45:54 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:54.239222 | orchestrator | 2026-01-07 00:45:54 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:54.240381 | orchestrator | 2026-01-07 00:45:54 | INFO  | Task 94f8bd24-06aa-4b1a-922f-8966b86e8403 is in state SUCCESS
2026-01-07 00:45:54.241950 | orchestrator | 2026-01-07 00:45:54 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:54.242010 | orchestrator | 2026-01-07 00:45:54 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:45:57.275365 | orchestrator | 2026-01-07 00:45:57 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:45:57.276637 | orchestrator | 2026-01-07 00:45:57 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:45:57.278651 | orchestrator | 2026-01-07 00:45:57 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:45:57.278745 | orchestrator | 2026-01-07 00:45:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:00.318344 | orchestrator | 2026-01-07 00:46:00 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:00.319194 | orchestrator | 2026-01-07 00:46:00 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:00.320100 | orchestrator | 2026-01-07 00:46:00 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:00.320406 | orchestrator | 2026-01-07 00:46:00 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:03.358671 | orchestrator | 2026-01-07 00:46:03 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:03.359248 | orchestrator | 2026-01-07 00:46:03 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:03.360407 | orchestrator | 2026-01-07 00:46:03 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:03.360457 | orchestrator | 2026-01-07 00:46:03 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:06.399334 | orchestrator | 2026-01-07 00:46:06 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:06.400368 | orchestrator | 2026-01-07 00:46:06 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:06.401656 | orchestrator | 2026-01-07 00:46:06 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:06.401721 | orchestrator | 2026-01-07 00:46:06 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:09.445660 | orchestrator | 2026-01-07 00:46:09 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:09.446286 | orchestrator | 2026-01-07 00:46:09 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:09.448196 | orchestrator | 2026-01-07 00:46:09 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:09.448229 | orchestrator | 2026-01-07 00:46:09 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:12.491646 | orchestrator | 2026-01-07 00:46:12 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:12.492732 | orchestrator | 2026-01-07 00:46:12 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:12.495562 | orchestrator | 2026-01-07 00:46:12 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:12.495597 | orchestrator | 2026-01-07 00:46:12 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:15.531244 | orchestrator | 2026-01-07 00:46:15 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:15.532372 | orchestrator | 2026-01-07 00:46:15 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED
2026-01-07 00:46:15.532912 | orchestrator | 2026-01-07 00:46:15 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:46:15.533064 | orchestrator | 2026-01-07 00:46:15 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:46:18.581835 | orchestrator | 2026-01-07 00:46:18 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:46:18.584816 | orchestrator |
2026-01-07 00:46:18 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:18.584866 | orchestrator | 2026-01-07 00:46:18 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:18.584873 | orchestrator | 2026-01-07 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:21.619702 | orchestrator | 2026-01-07 00:46:21 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:21.620865 | orchestrator | 2026-01-07 00:46:21 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:21.620889 | orchestrator | 2026-01-07 00:46:21 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:21.620894 | orchestrator | 2026-01-07 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:24.653747 | orchestrator | 2026-01-07 00:46:24 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:24.655442 | orchestrator | 2026-01-07 00:46:24 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:24.656925 | orchestrator | 2026-01-07 00:46:24 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:24.656950 | orchestrator | 2026-01-07 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:27.703976 | orchestrator | 2026-01-07 00:46:27 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:27.705769 | orchestrator | 2026-01-07 00:46:27 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:27.707517 | orchestrator | 2026-01-07 00:46:27 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:27.707811 | orchestrator | 2026-01-07 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:30.755146 | orchestrator | 2026-01-07 00:46:30 | INFO  | Task 
fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:30.770338 | orchestrator | 2026-01-07 00:46:30 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:30.770442 | orchestrator | 2026-01-07 00:46:30 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:30.770457 | orchestrator | 2026-01-07 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:33.798875 | orchestrator | 2026-01-07 00:46:33 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:33.800446 | orchestrator | 2026-01-07 00:46:33 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:33.802274 | orchestrator | 2026-01-07 00:46:33 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:33.802311 | orchestrator | 2026-01-07 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:36.844617 | orchestrator | 2026-01-07 00:46:36 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:36.846189 | orchestrator | 2026-01-07 00:46:36 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:36.847999 | orchestrator | 2026-01-07 00:46:36 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:36.848241 | orchestrator | 2026-01-07 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:39.889533 | orchestrator | 2026-01-07 00:46:39 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:39.889701 | orchestrator | 2026-01-07 00:46:39 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state STARTED 2026-01-07 00:46:39.893430 | orchestrator | 2026-01-07 00:46:39 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:39.893510 | orchestrator | 2026-01-07 00:46:39 | INFO  | Wait 1 second(s) until the next 
check 2026-01-07 00:46:42.928332 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:42.928861 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:42.931810 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task bb267291-71cd-412a-9084-5ce600cec103 is in state SUCCESS 2026-01-07 00:46:42.934296 | orchestrator | 2026-01-07 00:46:42.934364 | orchestrator | 2026-01-07 00:46:42.934375 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-07 00:46:42.934384 | orchestrator | 2026-01-07 00:46:42.934391 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-07 00:46:42.934399 | orchestrator | Wednesday 07 January 2026 00:44:47 +0000 (0:00:00.278) 0:00:00.278 ***** 2026-01-07 00:46:42.934407 | orchestrator | ok: [testbed-manager] 2026-01-07 00:46:42.934416 | orchestrator | 2026-01-07 00:46:42.934423 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-07 00:46:42.934431 | orchestrator | Wednesday 07 January 2026 00:44:48 +0000 (0:00:00.784) 0:00:01.063 ***** 2026-01-07 00:46:42.934439 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-07 00:46:42.934446 | orchestrator | 2026-01-07 00:46:42.934454 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-07 00:46:42.934461 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.674) 0:00:01.737 ***** 2026-01-07 00:46:42.934468 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.934476 | orchestrator | 2026-01-07 00:46:42.934483 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-07 00:46:42.934490 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.931) 0:00:02.669 
***** 2026-01-07 00:46:42.934498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-07 00:46:42.934524 | orchestrator | ok: [testbed-manager] 2026-01-07 00:46:42.934532 | orchestrator | 2026-01-07 00:46:42.934540 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-07 00:46:42.934547 | orchestrator | Wednesday 07 January 2026 00:45:46 +0000 (0:00:56.297) 0:00:58.967 ***** 2026-01-07 00:46:42.934554 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.934562 | orchestrator | 2026-01-07 00:46:42.934569 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:46:42.934577 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:46:42.934616 | orchestrator | 2026-01-07 00:46:42.934624 | orchestrator | 2026-01-07 00:46:42.934631 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:46:42.934639 | orchestrator | Wednesday 07 January 2026 00:45:53 +0000 (0:00:07.194) 0:01:06.162 ***** 2026-01-07 00:46:42.934646 | orchestrator | =============================================================================== 2026-01-07 00:46:42.934653 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.30s 2026-01-07 00:46:42.934661 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.20s 2026-01-07 00:46:42.934668 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.93s 2026-01-07 00:46:42.934676 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.78s 2026-01-07 00:46:42.934684 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.67s 2026-01-07 00:46:42.934691 | orchestrator | 2026-01-07 
00:46:42.934698 | orchestrator | 2026-01-07 00:46:42.934705 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-07 00:46:42.934713 | orchestrator | 2026-01-07 00:46:42.934720 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-07 00:46:42.934727 | orchestrator | Wednesday 07 January 2026 00:44:20 +0000 (0:00:00.253) 0:00:00.253 ***** 2026-01-07 00:46:42.934735 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:46:42.934762 | orchestrator | 2026-01-07 00:46:42.934770 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-07 00:46:42.934777 | orchestrator | Wednesday 07 January 2026 00:44:21 +0000 (0:00:01.175) 0:00:01.429 ***** 2026-01-07 00:46:42.934784 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934792 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934799 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934807 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934814 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934822 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934829 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934836 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934843 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-01-07 00:46:42.934852 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934859 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934867 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934874 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934881 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934889 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934896 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:46:42.934928 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934937 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934945 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:46:42.934952 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934959 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:46:42.934967 | orchestrator | 2026-01-07 00:46:42.934975 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-07 00:46:42.934982 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:03.935) 0:00:05.365 ***** 2026-01-07 00:46:42.934989 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-01-07 00:46:42.934999 | orchestrator | 2026-01-07 00:46:42.935069 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-07 00:46:42.935080 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:01.017) 0:00:06.382 ***** 2026-01-07 00:46:42.935099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935151 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935175 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.935227 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.935426 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-07 00:46:42.935433 | orchestrator | 2026-01-07 00:46:42.935441 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-07 00:46:42.935449 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:04.310) 0:00:10.692 ***** 2026-01-07 00:46:42.935475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.935484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.935502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.935510 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935518 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:46:42.935525 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935549 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:46:42.935557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935617 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:46:42.935625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935648 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:46:42.935656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935692 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:42.935700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935715 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:46:42.935723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935738 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:42.935745 | orchestrator |
2026-01-07 00:46:42.935752 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-07 00:46:42.935760 | orchestrator | Wednesday 07 January 2026 00:44:34 +0000 (0:00:03.370) 0:00:14.063 *****
2026-01-07 00:46:42.935767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935807 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935851 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:46:42.935859 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:46:42.935866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935933 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:46:42.935941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.935948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.935990 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:42.935997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936005 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:46:42.936038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936046 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:46:42.936053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936061 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:42.936068 | orchestrator |
2026-01-07 00:46:42.936075 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-01-07 00:46:42.936083 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:03.150) 0:00:17.213 *****
2026-01-07 00:46:42.936090 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:46:42.936097 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:46:42.936105 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:46:42.936112 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:46:42.936119 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:42.936126 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:46:42.936133 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:42.936140 | orchestrator |
2026-01-07 00:46:42.936148 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-07 00:46:42.936155 | orchestrator | Wednesday 07 January 2026 00:44:38 +0000 (0:00:01.814) 0:00:18.159 *****
2026-01-07 00:46:42.936163 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:46:42.936170 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:46:42.936177 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:46:42.936184 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:46:42.936191 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:42.936198 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:46:42.936205 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:42.936213 | orchestrator |
2026-01-07 00:46:42.936220 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-07 00:46:42.936236 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:01.362) 0:00:19.973 *****
2026-01-07 00:46:42.936243 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:46:42.936251 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:46:42.936258 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:46:42.936265 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:46:42.936272 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:46:42.936279 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:46:42.936286 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:42.936298 | orchestrator |
2026-01-07 00:46:42.936310 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-01-07 00:46:42.936322 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:01.362) 0:00:21.336 *****
2026-01-07 00:46:42.936334 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:46:42.936345 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:46:42.936357 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:46:42.936368 | orchestrator | changed: [testbed-manager]
2026-01-07 00:46:42.936379 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:46:42.936390 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:46:42.936402 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:46:42.936414 | orchestrator |
2026-01-07 00:46:42.936427 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-07 00:46:42.936438 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:03.119) 0:00:24.456 *****
2026-01-07 00:46:42.936458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936472 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:46:42.936551 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936625 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:46:42.936665 | orchestrator |
2026-01-07 00:46:42.936672 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-07 00:46:42.936679 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:04.876) 0:00:29.332 *****
2026-01-07 00:46:42.936687 | orchestrator | [WARNING]: Skipped
2026-01-07 00:46:42.936694 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-07 00:46:42.936701 | orchestrator | to this access issue:
2026-01-07 00:46:42.936709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-07 00:46:42.936716 | orchestrator | directory
2026-01-07 00:46:42.936723 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:46:42.936730 | orchestrator |
2026-01-07 00:46:42.936737 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-07 00:46:42.936744 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.769) 0:00:30.102 *****
2026-01-07 00:46:42.936752 | orchestrator | [WARNING]: Skipped
2026-01-07 00:46:42.936759 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-07 00:46:42.936766 | orchestrator | to this access issue:
2026-01-07 00:46:42.936774 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-07 00:46:42.936781 | orchestrator | directory
2026-01-07 00:46:42.936789 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:46:42.936796 | orchestrator |
2026-01-07 00:46:42.936803 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-07 00:46:42.936810 | orchestrator | Wednesday 07 January 2026 00:44:51 +0000 (0:00:00.617) 0:00:30.719 *****
2026-01-07 00:46:42.936817 | orchestrator | [WARNING]: Skipped
2026-01-07 00:46:42.936824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-07 00:46:42.936831 | orchestrator | to this access issue:
2026-01-07 00:46:42.936839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-07 00:46:42.936846 | orchestrator | directory
2026-01-07 00:46:42.936853 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:46:42.936860 | orchestrator |
2026-01-07 00:46:42.936867 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-07 00:46:42.936875 | orchestrator | Wednesday 07 January 2026 00:44:52 +0000 (0:00:01.229) 0:00:31.948 *****
2026-01-07 00:46:42.936882 | orchestrator | [WARNING]: Skipped
2026-01-07 00:46:42.936889 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-07 00:46:42.936896 | orchestrator | to this access issue:
2026-01-07 00:46:42.936903 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-07 00:46:42.936910 | orchestrator | directory
2026-01-07 00:46:42.936917 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:46:42.936925 | orchestrator |
2026-01-07 00:46:42.936936 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-07 00:46:42.936944 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.738) 0:00:32.687 *****
2026-01-07 00:46:42.936951 | orchestrator | changed: [testbed-manager]
2026-01-07 00:46:42.936959 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:46:42.936971 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:46:42.936978 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:46:42.936985 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:46:42.936993 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:46:42.937000 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:46:42.937025
| orchestrator | 2026-01-07 00:46:42.937034 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-07 00:46:42.937042 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:03.463) 0:00:36.150 ***** 2026-01-07 00:46:42.937049 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937056 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937063 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937070 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937082 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937089 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937097 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-07 00:46:42.937104 | orchestrator | 2026-01-07 00:46:42.937112 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-07 00:46:42.937119 | orchestrator | Wednesday 07 January 2026 00:44:58 +0000 (0:00:02.341) 0:00:38.491 ***** 2026-01-07 00:46:42.937126 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.937134 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.937141 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.937148 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.937155 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.937162 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.937169 | 
orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.937177 | orchestrator | 2026-01-07 00:46:42.937184 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-07 00:46:42.937191 | orchestrator | Wednesday 07 January 2026 00:45:01 +0000 (0:00:02.194) 0:00:40.686 ***** 2026-01-07 00:46:42.937199 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937245 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937268 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937283 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937314 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937322 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937333 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937357 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937364 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.937384 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937397 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937405 | orchestrator | 2026-01-07 00:46:42.937412 | orchestrator | 
TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-07 00:46:42.937420 | orchestrator | Wednesday 07 January 2026 00:45:02 +0000 (0:00:01.801) 0:00:42.487 ***** 2026-01-07 00:46:42.937427 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937458 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937490 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937527 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:46:42.937538 | orchestrator | 2026-01-07 00:46:42.937548 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-07 00:46:42.937559 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:02.813) 0:00:45.301 ***** 2026-01-07 00:46:42.937572 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937609 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937621 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937633 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937646 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:46:42.937655 | orchestrator | 2026-01-07 00:46:42.937662 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-07 00:46:42.937670 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:02.805) 0:00:48.106 ***** 2026-01-07 00:46:42.937685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937694 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937746 | orchestrator | changed: [testbed-manager] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:46:42.937795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937814 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:46:42.937891 | orchestrator | 2026-01-07 00:46:42.937899 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-07 00:46:42.937906 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:03.528) 0:00:51.635 ***** 2026-01-07 00:46:42.937917 | orchestrator | changed: [testbed-manager] => { 2026-01-07 00:46:42.937925 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.937933 | orchestrator | } 2026-01-07 00:46:42.937940 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:46:42.937948 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.937955 | orchestrator | } 2026-01-07 00:46:42.937962 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:46:42.937970 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.937982 | orchestrator | } 2026-01-07 00:46:42.937990 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:46:42.937997 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.938005 | orchestrator | } 2026-01-07 00:46:42.938060 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:46:42.938067 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.938077 | orchestrator | } 2026-01-07 00:46:42.938084 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 00:46:42.938091 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.938098 | orchestrator | } 2026-01-07 00:46:42.938106 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 00:46:42.938113 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:46:42.938121 | orchestrator | } 2026-01-07 00:46:42.938128 | orchestrator | 2026-01-07 00:46:42.938136 | orchestrator | TASK [service-check-containers : Include tasks] 
******************************** 2026-01-07 00:46:42.938143 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.715) 0:00:52.351 ***** 2026-01-07 00:46:42.938151 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938159 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938175 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:46:42.938183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938253 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:46:42.938261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938284 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:46:42.938298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938333 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:46:42.938341 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:42.938348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-07 00:46:42.938364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938371 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:42.938379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:46:42.938391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:46:42.938411 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:42.938419 | orchestrator | 2026-01-07 00:46:42.938427 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-07 00:46:42.938434 | orchestrator | Wednesday 07 January 2026 00:45:13 +0000 (0:00:01.244) 0:00:53.595 ***** 2026-01-07 00:46:42.938442 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.938449 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.938457 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.938468 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.938475 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.938483 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.938490 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.938497 | orchestrator | 2026-01-07 00:46:42.938505 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-07 00:46:42.938512 | orchestrator | Wednesday 07 January 2026 00:45:15 +0000 (0:00:01.464) 0:00:55.060 ***** 2026-01-07 00:46:42.938520 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.938527 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.938534 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.938542 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.938549 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.938556 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.938563 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.938570 | orchestrator | 2026-01-07 00:46:42.938578 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-01-07 00:46:42.938585 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:01.103) 0:00:56.163 ***** 2026-01-07 00:46:42.938593 | orchestrator | 2026-01-07 00:46:42.938600 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938608 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.083) 0:00:56.247 ***** 2026-01-07 00:46:42.938615 | orchestrator | 2026-01-07 00:46:42.938622 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938630 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.067) 0:00:56.314 ***** 2026-01-07 00:46:42.938637 | orchestrator | 2026-01-07 00:46:42.938645 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938652 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.188) 0:00:56.502 ***** 2026-01-07 00:46:42.938660 | orchestrator | 2026-01-07 00:46:42.938667 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938674 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.052) 0:00:56.555 ***** 2026-01-07 00:46:42.938682 | orchestrator | 2026-01-07 00:46:42.938689 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938696 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.046) 0:00:56.602 ***** 2026-01-07 00:46:42.938704 | orchestrator | 2026-01-07 00:46:42.938711 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-07 00:46:42.938718 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.052) 0:00:56.654 ***** 2026-01-07 00:46:42.938726 | orchestrator | 2026-01-07 00:46:42.938733 | orchestrator | RUNNING HANDLER [common : 
Restart fluentd container] *************************** 2026-01-07 00:46:42.938741 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.068) 0:00:56.723 ***** 2026-01-07 00:46:42.938753 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.938761 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.938768 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.938776 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.938783 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.938790 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.938798 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.938805 | orchestrator | 2026-01-07 00:46:42.938812 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-07 00:46:42.938820 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:30.588) 0:01:27.311 ***** 2026-01-07 00:46:42.938827 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.938834 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.938841 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.938848 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.938856 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.938863 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.938871 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.938878 | orchestrator | 2026-01-07 00:46:42.938885 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-07 00:46:42.938893 | orchestrator | Wednesday 07 January 2026 00:46:28 +0000 (0:00:40.619) 0:02:07.931 ***** 2026-01-07 00:46:42.938900 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:46:42.938907 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:46:42.938915 | orchestrator | ok: [testbed-manager] 2026-01-07 00:46:42.938922 | orchestrator | ok: [testbed-node-2] 2026-01-07 
00:46:42.938930 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:46:42.938937 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:42.938945 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:42.938952 | orchestrator | 2026-01-07 00:46:42.938959 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-07 00:46:42.938968 | orchestrator | Wednesday 07 January 2026 00:46:30 +0000 (0:00:01.930) 0:02:09.861 ***** 2026-01-07 00:46:42.938981 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:46:42.938989 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:46:42.938997 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:46:42.939005 | orchestrator | changed: [testbed-manager] 2026-01-07 00:46:42.939038 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:46:42.939045 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:46:42.939052 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:46:42.939060 | orchestrator | 2026-01-07 00:46:42.939067 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:46:42.939075 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939084 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939091 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939098 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939106 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939113 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939121 | orchestrator | testbed-node-5 : 
ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:46:42.939128 | orchestrator | 2026-01-07 00:46:42.939140 | orchestrator | 2026-01-07 00:46:42.939148 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:46:42.939155 | orchestrator | Wednesday 07 January 2026 00:46:40 +0000 (0:00:09.960) 0:02:19.822 ***** 2026-01-07 00:46:42.939162 | orchestrator | =============================================================================== 2026-01-07 00:46:42.939170 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.62s 2026-01-07 00:46:42.939177 | orchestrator | common : Restart fluentd container ------------------------------------- 30.59s 2026-01-07 00:46:42.939184 | orchestrator | common : Restart cron container ----------------------------------------- 9.96s 2026-01-07 00:46:42.939191 | orchestrator | common : Copying over config.json files for services -------------------- 4.88s 2026-01-07 00:46:42.939199 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.31s 2026-01-07 00:46:42.939206 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.94s 2026-01-07 00:46:42.939213 | orchestrator | service-check-containers : common | Check containers -------------------- 3.53s 2026-01-07 00:46:42.939220 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.46s 2026-01-07 00:46:42.939227 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.37s 2026-01-07 00:46:42.939234 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.15s 2026-01-07 00:46:42.939242 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.12s 2026-01-07 00:46:42.939249 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox 
------------------------ 2.81s 2026-01-07 00:46:42.939256 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.81s 2026-01-07 00:46:42.939264 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.34s 2026-01-07 00:46:42.939271 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.19s 2026-01-07 00:46:42.939279 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.93s 2026-01-07 00:46:42.939287 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.81s 2026-01-07 00:46:42.939294 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.80s 2026-01-07 00:46:42.939301 | orchestrator | common : Creating log volume -------------------------------------------- 1.46s 2026-01-07 00:46:42.939308 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.36s 2026-01-07 00:46:42.939321 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:42.939329 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:42.939336 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:42.939344 | orchestrator | 2026-01-07 00:46:42 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:42.939351 | orchestrator | 2026-01-07 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:45.971806 | orchestrator | 2026-01-07 00:46:45 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:45.971910 | orchestrator | 2026-01-07 00:46:45 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:45.972386 | orchestrator | 
2026-01-07 00:46:45 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:45.972941 | orchestrator | 2026-01-07 00:46:45 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:45.973505 | orchestrator | 2026-01-07 00:46:45 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:45.974059 | orchestrator | 2026-01-07 00:46:45 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:45.974149 | orchestrator | 2026-01-07 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:48.999402 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:48.999508 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:49.000707 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:49.001233 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:49.001950 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:49.002478 | orchestrator | 2026-01-07 00:46:49 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:49.002508 | orchestrator | 2026-01-07 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:52.044245 | orchestrator | 2026-01-07 00:46:52 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:52.044348 | orchestrator | 2026-01-07 00:46:52 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:52.047046 | orchestrator | 2026-01-07 00:46:52 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:52.048809 | orchestrator | 
2026-01-07 00:46:52 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:52.052674 | orchestrator | 2026-01-07 00:46:52 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:52.052717 | orchestrator | 2026-01-07 00:46:52 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:52.052723 | orchestrator | 2026-01-07 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:55.112243 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:55.112447 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:55.112795 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:55.113716 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:55.114442 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:55.116183 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:55.118424 | orchestrator | 2026-01-07 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:46:58.177497 | orchestrator | 2026-01-07 00:46:58 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:46:58.177582 | orchestrator | 2026-01-07 00:46:58 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state STARTED 2026-01-07 00:46:58.177592 | orchestrator | 2026-01-07 00:46:58 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED 2026-01-07 00:46:58.177599 | orchestrator | 2026-01-07 00:46:58 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:46:58.177606 | orchestrator | 
2026-01-07 00:46:58 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:46:58.177612 | orchestrator | 2026-01-07 00:46:58 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:46:58.177646 | orchestrator | 2026-01-07 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:01.243083 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:01.243163 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task f761e2f2-060e-4f9d-af00-25962132c14b is in state SUCCESS 2026-01-07 00:47:01.243463 | orchestrator | 2026-01-07 00:47:01.243477 | orchestrator | 2026-01-07 00:47:01.243482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:47:01.243488 | orchestrator | 2026-01-07 00:47:01.243493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:47:01.243499 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.377) 0:00:00.377 ***** 2026-01-07 00:47:01.243504 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:47:01.243510 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:47:01.243514 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:47:01.243519 | orchestrator | 2026-01-07 00:47:01.243524 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:47:01.243529 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.554) 0:00:00.931 ***** 2026-01-07 00:47:01.243534 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-07 00:47:01.243539 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-07 00:47:01.243543 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-07 00:47:01.243548 | orchestrator | 2026-01-07 00:47:01.243552 | orchestrator | PLAY [Apply role memcached] 
**************************************************** 2026-01-07 00:47:01.243557 | orchestrator | 2026-01-07 00:47:01.243562 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-07 00:47:01.243566 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.788) 0:00:01.720 ***** 2026-01-07 00:47:01.243604 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:47:01.243610 | orchestrator | 2026-01-07 00:47:01.243615 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-07 00:47:01.243620 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.688) 0:00:02.409 ***** 2026-01-07 00:47:01.243625 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-07 00:47:01.243630 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-07 00:47:01.243634 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-07 00:47:01.243639 | orchestrator | 2026-01-07 00:47:01.243643 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-07 00:47:01.243648 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:00.905) 0:00:03.314 ***** 2026-01-07 00:47:01.243653 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-07 00:47:01.243657 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-07 00:47:01.243662 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-07 00:47:01.243666 | orchestrator | 2026-01-07 00:47:01.243671 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-07 00:47:01.243675 | orchestrator | Wednesday 07 January 2026 00:46:52 +0000 (0:00:02.239) 0:00:05.554 ***** 2026-01-07 00:47:01.243683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:47:01.243706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:47:01.243719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 
'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:47:01.243724 | orchestrator | 2026-01-07 00:47:01.243729 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-01-07 00:47:01.243734 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:01.420) 0:00:06.975 ***** 2026-01-07 00:47:01.243739 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:47:01.243743 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:01.243748 | orchestrator | } 2026-01-07 00:47:01.243753 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:47:01.243757 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:01.243762 | orchestrator | } 2026-01-07 00:47:01.243767 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:47:01.243771 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:01.243776 | orchestrator | } 2026-01-07 00:47:01.243780 | orchestrator | 2026-01-07 00:47:01.243785 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:47:01.243790 | orchestrator | Wednesday 07 January 2026 00:46:54 +0000 (0:00:00.690) 0:00:07.665 ***** 2026-01-07 00:47:01.243797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:47:01.243803 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:01.243808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:47:01.243817 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:01.243822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:47:01.243826 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:01.243831 | 
orchestrator |
2026-01-07 00:47:01.243836 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-07 00:47:01.243840 | orchestrator | Wednesday 07 January 2026 00:46:55 +0000 (0:00:01.227) 0:00:08.892 *****
2026-01-07 00:47:01.243845 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:47:01.243849 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:47:01.243854 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:47:01.243858 | orchestrator |
2026-01-07 00:47:01.243863 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:47:01.243869 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:47:01.243875 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:47:01.243880 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:47:01.243885 | orchestrator |
2026-01-07 00:47:01.243889 | orchestrator |
2026-01-07 00:47:01.243894 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:47:01.243899 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:02.285) 0:00:11.178 *****
2026-01-07 00:47:01.243906 | orchestrator | ===============================================================================
2026-01-07 00:47:01.243911 | orchestrator | memcached : Restart memcached container --------------------------------- 2.29s
2026-01-07 00:47:01.243916 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.24s
2026-01-07 00:47:01.243920 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.42s
2026-01-07 00:47:01.243925 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.23s
2026-01-07 00:47:01.243930 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.91s
2026-01-07 00:47:01.243934 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-01-07 00:47:01.243939 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.69s
2026-01-07 00:47:01.243943 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.69s
2026-01-07 00:47:01.243948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s
2026-01-07 00:47:01.243982 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED
2026-01-07 00:47:01.244162 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED
2026-01-07 00:47:01.244885 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:47:01.245415 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:47:01.245938 | orchestrator | 2026-01-07 00:47:01 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED
2026-01-07 00:47:01.245968 | orchestrator | 2026-01-07 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:47:04.285189 | orchestrator | 2026-01-07 00:47:04 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:47:04.285267 | orchestrator | 2026-01-07 00:47:04 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED
2026-01-07 00:47:04.285273 | orchestrator | 2026-01-07 00:47:04 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED
2026-01-07 00:47:04.285278 | orchestrator | 2026-01-07 00:47:04 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:47:04.285282 | orchestrator |
2026-01-07 00:47:04 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:47:04.285286 | orchestrator | 2026-01-07 00:47:04 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED
2026-01-07 00:47:04.285291 | orchestrator | 2026-01-07 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:47:16.453535 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:47:16.453640 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED
2026-01-07 00:47:16.455086 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED
2026-01-07 00:47:16.458946 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:47:16.459029 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:47:16.459042 | orchestrator | 2026-01-07 00:47:16 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED
2026-01-07 00:47:16.459054 | orchestrator |
2026-01-07 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:47:19.492396 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:47:19.492655 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state STARTED
2026-01-07 00:47:19.493840 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED
2026-01-07 00:47:19.494571 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:47:19.495457 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:47:19.496175 | orchestrator | 2026-01-07 00:47:19 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED
2026-01-07 00:47:19.496206 | orchestrator | 2026-01-07 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:47:22.528057 | orchestrator | 2026-01-07 00:47:22 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:47:22.528404 | orchestrator |
2026-01-07 00:47:22.529100 | orchestrator | 2026-01-07 00:47:22 | INFO  | Task 7c0098a9-640f-4d79-bfb1-dbf707ed7e87 is in state SUCCESS
2026-01-07 00:47:22.529569 | orchestrator |
2026-01-07 00:47:22.529601 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:47:22.529614 | orchestrator |
2026-01-07 00:47:22.529630 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:47:22.529643 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.441) 0:00:00.441 *****
2026-01-07 00:47:22.529654 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:47:22.529666 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:47:22.529694 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:47:22.529719 |
orchestrator | 2026-01-07 00:47:22.529738 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:47:22.529753 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.604) 0:00:01.045 ***** 2026-01-07 00:47:22.529765 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-07 00:47:22.529777 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-07 00:47:22.529788 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-07 00:47:22.529799 | orchestrator | 2026-01-07 00:47:22.529811 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-07 00:47:22.529859 | orchestrator | 2026-01-07 00:47:22.529873 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-07 00:47:22.529918 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.808) 0:00:01.854 ***** 2026-01-07 00:47:22.529929 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:47:22.529941 | orchestrator | 2026-01-07 00:47:22.529952 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-07 00:47:22.529963 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.781) 0:00:02.636 ***** 2026-01-07 00:47:22.530002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2026-01-07 00:47:22.530074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530145 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530182 | orchestrator | 2026-01-07 00:47:22.530195 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-07 00:47:22.530208 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:01.370) 0:00:04.006 ***** 2026-01-07 00:47:22.530222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530327 | orchestrator | 2026-01-07 00:47:22.530339 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-07 00:47:22.530353 | orchestrator | Wednesday 07 January 2026 00:46:54 +0000 (0:00:03.193) 0:00:07.200 ***** 2026-01-07 00:47:22.530407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530506 | orchestrator | 2026-01-07 00:47:22.530517 | orchestrator | TASK [service-check-containers : redis | Check containers] 
********************* 2026-01-07 00:47:22.530528 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:02.690) 0:00:09.891 ***** 2026-01-07 00:47:22.530540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:47:22.530629 | orchestrator | 2026-01-07 00:47:22.530640 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-07 00:47:22.530651 | orchestrator | Wednesday 07 January 2026 00:46:58 +0000 (0:00:01.744) 0:00:11.636 ***** 2026-01-07 00:47:22.530663 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:47:22.530674 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:22.530686 | orchestrator | } 2026-01-07 00:47:22.530697 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:47:22.530708 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:22.530718 | orchestrator | } 2026-01-07 00:47:22.530729 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:47:22.530740 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:22.530751 | orchestrator | } 2026-01-07 00:47:22.530762 | orchestrator | 2026-01-07 00:47:22.530773 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:47:22.530784 | orchestrator | Wednesday 07 January 2026 00:46:59 +0000 (0:00:00.560) 0:00:12.196 ***** 2026-01-07 00:47:22.530795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530818 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:22.530830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530859 | orchestrator | skipping: [testbed-node-1] 
2026-01-07 00:47:22.530870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-07 00:47:22.530901 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:22.530912 | orchestrator | 2026-01-07 00:47:22.530923 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:47:22.530934 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.932) 0:00:13.128 ***** 2026-01-07 00:47:22.530945 | orchestrator | 2026-01-07 00:47:22.530956 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:47:22.530986 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.145) 0:00:13.274 ***** 2026-01-07 00:47:22.530998 | orchestrator | 
2026-01-07 00:47:22.531016 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:47:22.531028 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.114) 0:00:13.388 ***** 2026-01-07 00:47:22.531039 | orchestrator | 2026-01-07 00:47:22.531050 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-07 00:47:22.531061 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.108) 0:00:13.496 ***** 2026-01-07 00:47:22.531072 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:47:22.531083 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:47:22.531094 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:47:22.531104 | orchestrator | 2026-01-07 00:47:22.531115 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-07 00:47:22.531126 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:08.423) 0:00:21.920 ***** 2026-01-07 00:47:22.531181 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:47:22.531194 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:47:22.531205 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:47:22.531215 | orchestrator | 2026-01-07 00:47:22.531226 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:47:22.531239 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:47:22.531252 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:47:22.531263 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 00:47:22.531274 | orchestrator | 2026-01-07 00:47:22.531284 | orchestrator | 2026-01-07 00:47:22.531295 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 00:47:22.531322 | orchestrator | Wednesday 07 January 2026 00:47:19 +0000 (0:00:10.065) 0:00:31.985 ***** 2026-01-07 00:47:22.531333 | orchestrator | =============================================================================== 2026-01-07 00:47:22.531344 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.07s 2026-01-07 00:47:22.531355 | orchestrator | redis : Restart redis container ----------------------------------------- 8.42s 2026-01-07 00:47:22.531373 | orchestrator | redis : Copying over default config.json files -------------------------- 3.19s 2026-01-07 00:47:22.531384 | orchestrator | redis : Copying over redis config files --------------------------------- 2.69s 2026-01-07 00:47:22.531395 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.74s 2026-01-07 00:47:22.531405 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.37s 2026-01-07 00:47:22.531416 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.93s 2026-01-07 00:47:22.531427 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-01-07 00:47:22.531438 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2026-01-07 00:47:22.531449 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2026-01-07 00:47:22.531460 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.56s 2026-01-07 00:47:22.531471 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s 2026-01-07 00:47:22.531482 | orchestrator | 2026-01-07 00:47:22 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:22.531493 | orchestrator | 2026-01-07 
00:47:22 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:22.531504 | orchestrator | 2026-01-07 00:47:22 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:22.531621 | orchestrator | 2026-01-07 00:47:22 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:22.531638 | orchestrator | 2026-01-07 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:25.561445 | orchestrator | 2026-01-07 00:47:25 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:25.561615 | orchestrator | 2026-01-07 00:47:25 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:25.562437 | orchestrator | 2026-01-07 00:47:25 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:25.563680 | orchestrator | 2026-01-07 00:47:25 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:25.563692 | orchestrator | 2026-01-07 00:47:25 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:25.563697 | orchestrator | 2026-01-07 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:28.595808 | orchestrator | 2026-01-07 00:47:28 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:28.599015 | orchestrator | 2026-01-07 00:47:28 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:28.600264 | orchestrator | 2026-01-07 00:47:28 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:28.600603 | orchestrator | 2026-01-07 00:47:28 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:28.601635 | orchestrator | 2026-01-07 00:47:28 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:28.601693 | orchestrator | 2026-01-07 
00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:31.634414 | orchestrator | 2026-01-07 00:47:31 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:31.637172 | orchestrator | 2026-01-07 00:47:31 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:31.637788 | orchestrator | 2026-01-07 00:47:31 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:31.639035 | orchestrator | 2026-01-07 00:47:31 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:31.641398 | orchestrator | 2026-01-07 00:47:31 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:31.641473 | orchestrator | 2026-01-07 00:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:34.696877 | orchestrator | 2026-01-07 00:47:34 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:34.697212 | orchestrator | 2026-01-07 00:47:34 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:34.697977 | orchestrator | 2026-01-07 00:47:34 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:34.698721 | orchestrator | 2026-01-07 00:47:34 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:34.699272 | orchestrator | 2026-01-07 00:47:34 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:34.699375 | orchestrator | 2026-01-07 00:47:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:37.737474 | orchestrator | 2026-01-07 00:47:37 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:37.739559 | orchestrator | 2026-01-07 00:47:37 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:37.741227 | orchestrator | 2026-01-07 00:47:37 | INFO  | Task 
2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:37.743131 | orchestrator | 2026-01-07 00:47:37 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:37.744861 | orchestrator | 2026-01-07 00:47:37 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:37.745029 | orchestrator | 2026-01-07 00:47:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:40.780502 | orchestrator | 2026-01-07 00:47:40 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:40.780583 | orchestrator | 2026-01-07 00:47:40 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:40.782339 | orchestrator | 2026-01-07 00:47:40 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:40.785073 | orchestrator | 2026-01-07 00:47:40 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:40.786103 | orchestrator | 2026-01-07 00:47:40 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:40.786150 | orchestrator | 2026-01-07 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:43.815289 | orchestrator | 2026-01-07 00:47:43 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:43.815458 | orchestrator | 2026-01-07 00:47:43 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:43.816595 | orchestrator | 2026-01-07 00:47:43 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:43.817365 | orchestrator | 2026-01-07 00:47:43 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:43.817625 | orchestrator | 2026-01-07 00:47:43 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:43.817665 | orchestrator | 2026-01-07 00:47:43 | INFO  | Wait 1 
second(s) until the next check 2026-01-07 00:47:46.871487 | orchestrator | 2026-01-07 00:47:46 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:46.871573 | orchestrator | 2026-01-07 00:47:46 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:46.871582 | orchestrator | 2026-01-07 00:47:46 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:46.871587 | orchestrator | 2026-01-07 00:47:46 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:46.871592 | orchestrator | 2026-01-07 00:47:46 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:46.871596 | orchestrator | 2026-01-07 00:47:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:49.904423 | orchestrator | 2026-01-07 00:47:49 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:49.907591 | orchestrator | 2026-01-07 00:47:49 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:49.909138 | orchestrator | 2026-01-07 00:47:49 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:49.910462 | orchestrator | 2026-01-07 00:47:49 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:49.911964 | orchestrator | 2026-01-07 00:47:49 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:49.912169 | orchestrator | 2026-01-07 00:47:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:52.955410 | orchestrator | 2026-01-07 00:47:52 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:52.957178 | orchestrator | 2026-01-07 00:47:52 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state STARTED 2026-01-07 00:47:52.958121 | orchestrator | 2026-01-07 00:47:52 | INFO  | Task 
2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED 2026-01-07 00:47:52.959639 | orchestrator | 2026-01-07 00:47:52 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:47:52.961565 | orchestrator | 2026-01-07 00:47:52 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:47:52.961608 | orchestrator | 2026-01-07 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:55.988077 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:47:55.988563 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task 452dac5f-2ac1-4480-be02-6dcde7a74fc0 is in state SUCCESS 2026-01-07 00:47:55.989825 | orchestrator | 2026-01-07 00:47:55.989861 | orchestrator | 2026-01-07 00:47:55.989869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:47:55.989877 | orchestrator | 2026-01-07 00:47:55.989884 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:47:55.989891 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.455) 0:00:00.455 ***** 2026-01-07 00:47:55.990002 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:47:55.990062 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:47:55.990073 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:47:55.990079 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:47:55.990085 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:47:55.990092 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:47:55.990098 | orchestrator | 2026-01-07 00:47:55.990104 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:47:55.990113 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:01.114) 0:00:01.569 ***** 2026-01-07 00:47:55.990151 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990160 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990166 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990172 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990179 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990185 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:47:55.990192 | orchestrator | 2026-01-07 00:47:55.990199 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-07 00:47:55.990205 | orchestrator | 2026-01-07 00:47:55.990211 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-07 00:47:55.990218 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:01.104) 0:00:02.673 ***** 2026-01-07 00:47:55.990227 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:47:55.990235 | orchestrator | 2026-01-07 00:47:55.990241 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 00:47:55.990248 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:01.471) 0:00:04.144 ***** 2026-01-07 00:47:55.990254 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:47:55.990262 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:47:55.990268 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:47:55.990273 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:47:55.990279 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:47:55.990285 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:47:55.990291 | orchestrator | 2026-01-07 00:47:55.990297 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 00:47:55.990303 | orchestrator | Wednesday 07 January 2026 00:46:52 +0000 (0:00:01.758) 0:00:05.903 ***** 2026-01-07 00:47:55.990309 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:47:55.990316 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:47:55.990321 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:47:55.990328 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:47:55.990333 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:47:55.990339 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:47:55.990346 | orchestrator | 2026-01-07 00:47:55.990352 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 00:47:55.990358 | orchestrator | Wednesday 07 January 2026 00:46:54 +0000 (0:00:01.968) 0:00:07.871 ***** 2026-01-07 00:47:55.990365 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-07 00:47:55.990373 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:55.990380 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-07 00:47:55.990387 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:55.990393 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-07 00:47:55.990399 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:55.990406 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-07 00:47:55.990412 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:55.990419 | orchestrator | skipping: 
[testbed-node-4] => (item=openvswitch)  2026-01-07 00:47:55.990425 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:55.990431 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-07 00:47:55.990437 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:47:55.990444 | orchestrator | 2026-01-07 00:47:55.990459 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-07 00:47:55.990465 | orchestrator | Wednesday 07 January 2026 00:46:56 +0000 (0:00:01.312) 0:00:09.184 ***** 2026-01-07 00:47:55.990472 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:55.990478 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:55.990484 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:55.990491 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:55.990498 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:55.990504 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:47:55.990510 | orchestrator | 2026-01-07 00:47:55.990532 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-07 00:47:55.990539 | orchestrator | Wednesday 07 January 2026 00:46:56 +0000 (0:00:00.653) 0:00:09.837 ***** 2026-01-07 00:47:55.990564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-01-07 00:47:55.990577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990641 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990681 | orchestrator | 2026-01-07 00:47:55.990688 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-07 00:47:55.990695 | orchestrator | Wednesday 07 January 2026 00:46:58 +0000 (0:00:01.604) 0:00:11.441 ***** 2026-01-07 00:47:55.990702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990716 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990799 | orchestrator | 2026-01-07 00:47:55.990804 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-07 00:47:55.990809 | orchestrator | Wednesday 07 January 2026 00:47:01 +0000 (0:00:03.588) 0:00:15.029 ***** 2026-01-07 00:47:55.990813 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:55.990817 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:55.990821 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:55.990825 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:55.990829 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:55.990832 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 00:47:55.990836 | orchestrator | 2026-01-07 00:47:55.990840 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-01-07 00:47:55.990844 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:01.385) 0:00:16.415 ***** 2026-01-07 00:47:55.990848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:47:55.990970 | orchestrator | 2026-01-07 00:47:55.990979 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-01-07 00:47:55.990985 | orchestrator | Wednesday 07 January 2026 00:47:05 +0000 (0:00:02.294) 0:00:18.710 ***** 2026-01-07 00:47:55.990992 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:47:55.991007 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:55.991014 | orchestrator | } 2026-01-07 00:47:55.991020 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:47:55.991026 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:55.991032 | orchestrator | } 2026-01-07 00:47:55.991038 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:47:55.991044 | orchestrator |  
"msg": "Notifying handlers" 2026-01-07 00:47:55.991051 | orchestrator | } 2026-01-07 00:47:55.991057 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:47:55.991063 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:55.991069 | orchestrator | } 2026-01-07 00:47:55.991074 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 00:47:55.991080 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:55.991086 | orchestrator | } 2026-01-07 00:47:55.991092 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 00:47:55.991098 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:47:55.991105 | orchestrator | } 2026-01-07 00:47:55.991111 | orchestrator | 2026-01-07 00:47:55.991117 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:47:55.991123 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.857) 0:00:19.567 ***** 2026-01-07 00:47:55.991130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.991137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.991144 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:47:55.991319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.991338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.991353 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.991360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.991366 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:47:55.991372 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:47:55.991378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.991974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.992025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.992035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.992053 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:47:55.992060 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:47:55.992067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-07 00:47:55.992078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-07 00:47:55.992085 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:47:55.992091 | orchestrator | 2026-01-07 00:47:55.992098 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992105 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:01.276) 0:00:20.843 ***** 2026-01-07 00:47:55.992111 | orchestrator | 2026-01-07 00:47:55.992118 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992124 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.315) 0:00:21.159 ***** 2026-01-07 00:47:55.992131 | orchestrator | 2026-01-07 00:47:55.992137 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992144 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.247) 0:00:21.406 ***** 2026-01-07 00:47:55.992150 | orchestrator | 2026-01-07 00:47:55.992192 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992199 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.156) 0:00:21.563 ***** 2026-01-07 00:47:55.992206 | orchestrator | 2026-01-07 00:47:55.992212 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992218 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.295) 0:00:21.858 ***** 2026-01-07 00:47:55.992225 | orchestrator | 2026-01-07 00:47:55.992232 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:47:55.992239 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.105) 0:00:21.964 ***** 2026-01-07 00:47:55.992245 
| orchestrator |
2026-01-07 00:47:55.992252 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-07 00:47:55.992260 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.128) 0:00:22.093 *****
2026-01-07 00:47:55.992267 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:47:55.992281 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:47:55.992293 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:47:55.992299 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:47:55.992305 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:47:55.992311 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:47:55.992318 | orchestrator |
2026-01-07 00:47:55.992325 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-07 00:47:55.992340 | orchestrator | Wednesday 07 January 2026 00:47:18 +0000 (0:00:09.766) 0:00:31.860 *****
2026-01-07 00:47:55.992346 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:47:55.992354 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:47:55.992360 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:47:55.992366 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:47:55.992372 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:47:55.992378 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:47:55.992384 | orchestrator |
2026-01-07 00:47:55.992390 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-07 00:47:55.992396 | orchestrator | Wednesday 07 January 2026 00:47:20 +0000 (0:00:02.058) 0:00:33.918 *****
2026-01-07 00:47:55.992402 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:47:55.992409 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:47:55.992415 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:47:55.992421 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:47:55.992428 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:47:55.992434 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:47:55.992440 | orchestrator |
2026-01-07 00:47:55.992446 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-07 00:47:55.992452 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:09.813) 0:00:43.731 *****
2026-01-07 00:47:55.992457 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-07 00:47:55.992465 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-07 00:47:55.992471 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-07 00:47:55.992477 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-07 00:47:55.992483 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-07 00:47:55.992489 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-07 00:47:55.992495 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-07 00:47:55.992501 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-07 00:47:55.992507 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-07 00:47:55.992514 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-07 00:47:55.992520 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-07 00:47:55.992532 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-07 00:47:55.992538 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992544 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992550 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992556 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992569 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992576 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-07 00:47:55.992582 | orchestrator |
2026-01-07 00:47:55.992589 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-07 00:47:55.992596 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:08.946) 0:00:52.678 *****
2026-01-07 00:47:55.992602 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-07 00:47:55.992608 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:47:55.992614 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-07 00:47:55.992621 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:47:55.992627 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-07 00:47:55.992633 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:55.992640 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-07 00:47:55.992646 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-07 00:47:55.992657 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-07 00:47:55.992663 | orchestrator |
2026-01-07 00:47:55.992669 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-07 00:47:55.992675 | orchestrator | Wednesday 07 January 2026 00:47:42 +0000 (0:00:02.664) 0:00:55.342 *****
2026-01-07 00:47:55.992682 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992688 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:47:55.992694 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992701 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:47:55.992707 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992713 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:47:55.992720 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992733 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992739 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-07 00:47:55.992745 | orchestrator |
2026-01-07 00:47:55.992751 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-07 00:47:55.992758 | orchestrator | Wednesday 07 January 2026 00:47:46 +0000 (0:00:04.031) 0:00:59.374 *****
2026-01-07 00:47:55.992764 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:47:55.992770 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:47:55.992776 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:47:55.992781 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:47:55.992787 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:47:55.992793 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:47:55.992798 | orchestrator |
2026-01-07
00:47:55.992804 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:47:55.992828 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:47:55.992837 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:47:55.992843 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:47:55.992849 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:47:55.992856 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:47:55.992870 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:47:55.992877 | orchestrator |
2026-01-07 00:47:55.992883 | orchestrator |
2026-01-07 00:47:55.992891 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:47:55.992897 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:07.782) 0:01:07.156 *****
2026-01-07 00:47:55.992905 | orchestrator | ===============================================================================
2026-01-07 00:47:55.992911 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.60s
2026-01-07 00:47:55.992917 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.77s
2026-01-07 00:47:55.992922 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.95s
2026-01-07 00:47:55.992960 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.03s
2026-01-07 00:47:55.992967 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.59s
2026-01-07 00:47:55.992973 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.66s
2026-01-07 00:47:55.992980 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.29s
2026-01-07 00:47:55.992985 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.06s
2026-01-07 00:47:55.992992 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.97s
2026-01-07 00:47:55.992997 | orchestrator | module-load : Load modules ---------------------------------------------- 1.76s
2026-01-07 00:47:55.993003 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.60s
2026-01-07 00:47:55.993010 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.47s
2026-01-07 00:47:55.993015 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.39s
2026-01-07 00:47:55.993022 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.31s
2026-01-07 00:47:55.993028 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.28s
2026-01-07 00:47:55.993034 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.25s
2026-01-07 00:47:55.993041 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s
2026-01-07 00:47:55.993047 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s
2026-01-07 00:47:55.993053 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.86s
2026-01-07 00:47:55.993060 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.65s
2026-01-07 00:47:55.993066 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state
STARTED
2026-01-07 00:47:55.993184 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:47:55.993198 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED
2026-01-07 00:47:55.993466 | orchestrator | 2026-01-07 00:47:55 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED
2026-01-07 00:47:55.993481 | orchestrator | 2026-01-07 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:48:26.525430 | orchestrator | 2026-01-07 00:48:26 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:48:26.527783 | orchestrator | 2026-01-07 00:48:26 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state STARTED
2026-01-07 00:48:26.530505 | orchestrator | 2026-01-07
00:48:26 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:48:26.533029 | orchestrator | 2026-01-07 00:48:26 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:48:26.535248 | orchestrator | 2026-01-07 00:48:26 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:48:26.535303 | orchestrator | 2026-01-07 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:29.575375 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:48:29.578777 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task db69ed5e-722c-49e3-ad48-7ccf0471ab4a is in state STARTED 2026-01-07 00:48:29.583169 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task 2ebf6d30-9702-4aa7-a8ca-679d1e0a20f1 is in state SUCCESS 2026-01-07 00:48:29.585500 | orchestrator | 2026-01-07 00:48:29.585561 | orchestrator | 2026-01-07 00:48:29.585579 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-07 00:48:29.585605 | orchestrator | 2026-01-07 00:48:29.585633 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-07 00:48:29.585652 | orchestrator | Wednesday 07 January 2026 00:44:21 +0000 (0:00:00.200) 0:00:00.200 ***** 2026-01-07 00:48:29.585670 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:48:29.585690 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:48:29.585707 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:48:29.585723 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:48:29.585742 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:48:29.585759 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:48:29.585778 | orchestrator | 2026-01-07 00:48:29.585797 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-07 00:48:29.585815 | orchestrator | Wednesday 07 January 
2026 00:44:22 +0000 (0:00:00.854) 0:00:01.054 ***** 2026-01-07 00:48:29.585833 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.585852 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.585870 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.585890 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.585939 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.585962 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.585989 | orchestrator | 2026-01-07 00:48:29.586006 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-07 00:48:29.586104 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.771) 0:00:01.826 ***** 2026-01-07 00:48:29.586117 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.586130 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.586142 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.586155 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.586168 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.586180 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.586196 | orchestrator | 2026-01-07 00:48:29.586215 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-07 00:48:29.586246 | orchestrator | Wednesday 07 January 2026 00:44:24 +0000 (0:00:01.051) 0:00:02.877 ***** 2026-01-07 00:48:29.586264 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:48:29.586281 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:48:29.586299 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:48:29.586316 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:48:29.586334 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:48:29.586442 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:48:29.586467 | orchestrator | 2026-01-07 00:48:29.586480 | orchestrator | TASK 
[k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-07 00:48:29.586492 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:02.408) 0:00:05.286 ***** 2026-01-07 00:48:29.586502 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:48:29.586513 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:48:29.586524 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:48:29.586535 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:48:29.586546 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:48:29.586557 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:48:29.586568 | orchestrator | 2026-01-07 00:48:29.586579 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-07 00:48:29.586590 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:01.199) 0:00:06.485 ***** 2026-01-07 00:48:29.586601 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:48:29.586854 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:48:29.586947 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:48:29.586959 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:48:29.586966 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:48:29.586974 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:48:29.586981 | orchestrator | 2026-01-07 00:48:29.586991 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-07 00:48:29.587000 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.975) 0:00:07.461 ***** 2026-01-07 00:48:29.587007 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587013 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587020 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587027 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587033 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587040 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587047 | orchestrator | 2026-01-07 00:48:29.587054 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-07 00:48:29.587061 | orchestrator | Wednesday 07 January 2026 00:44:29 +0000 (0:00:00.763) 0:00:08.224 ***** 2026-01-07 00:48:29.587068 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587075 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587082 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587107 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587119 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587131 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587142 | orchestrator | 2026-01-07 00:48:29.587153 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-07 00:48:29.587163 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:00.822) 0:00:09.046 ***** 2026-01-07 00:48:29.587174 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587184 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587195 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587207 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587219 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587230 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587242 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587252 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587263 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587273 | orchestrator 
| skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587306 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587313 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587320 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587339 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587346 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587353 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 00:48:29.587359 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 00:48:29.587366 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587373 | orchestrator | 2026-01-07 00:48:29.587380 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-07 00:48:29.587386 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.854) 0:00:09.901 ***** 2026-01-07 00:48:29.587393 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587400 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587407 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587427 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587434 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587441 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587448 | orchestrator | 2026-01-07 00:48:29.587454 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-07 00:48:29.587462 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:01.689) 0:00:11.591 ***** 2026-01-07 00:48:29.587469 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:48:29.587478 | 
orchestrator | ok: [testbed-node-4] 2026-01-07 00:48:29.587484 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:48:29.587491 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:48:29.587498 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:48:29.587504 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:48:29.587511 | orchestrator | 2026-01-07 00:48:29.587518 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-07 00:48:29.587524 | orchestrator | Wednesday 07 January 2026 00:44:33 +0000 (0:00:00.973) 0:00:12.565 ***** 2026-01-07 00:48:29.587531 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:48:29.587538 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:48:29.587545 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:48:29.587551 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:48:29.587558 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:48:29.587565 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:48:29.587572 | orchestrator | 2026-01-07 00:48:29.587578 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-07 00:48:29.587585 | orchestrator | Wednesday 07 January 2026 00:44:40 +0000 (0:00:06.309) 0:00:18.874 ***** 2026-01-07 00:48:29.587592 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587598 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587605 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587611 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587618 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587625 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587631 | orchestrator | 2026-01-07 00:48:29.587638 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-07 00:48:29.587645 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:01.385) 0:00:20.260 ***** 
2026-01-07 00:48:29.587652 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587658 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587665 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587672 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587678 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587685 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587691 | orchestrator | 2026-01-07 00:48:29.587698 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-07 00:48:29.587706 | orchestrator | Wednesday 07 January 2026 00:44:43 +0000 (0:00:01.759) 0:00:22.020 ***** 2026-01-07 00:48:29.587713 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587720 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.587726 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.587733 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.587739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.587746 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.587753 | orchestrator | 2026-01-07 00:48:29.587759 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-07 00:48:29.587766 | orchestrator | Wednesday 07 January 2026 00:44:43 +0000 (0:00:00.744) 0:00:22.765 ***** 2026-01-07 00:48:29.587778 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-07 00:48:29.587786 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-07 00:48:29.587796 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-07 00:48:29.587807 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-07 00:48:29.587824 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.587842 | orchestrator | skipping: [testbed-node-5] => 
(item=rancher)
skipping: [testbed-node-5] => (item=rancher/k3s)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=rancher)
skipping: [testbed-node-0] => (item=rancher/k3s)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=rancher)
skipping: [testbed-node-1] => (item=rancher/k3s)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=rancher)
skipping: [testbed-node-2] => (item=rancher/k3s)
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Wednesday 07 January 2026 00:44:45 +0000 (0:00:01.129) 0:00:23.894 *****
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Wednesday 07 January 2026 00:44:45 +0000 (0:00:00.955) 0:00:24.849 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Wednesday 07 January 2026 00:44:47 +0000 (0:00:01.870) 0:00:26.720 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s-init] **********************************************
Wednesday 07 January 2026 00:44:49 +0000 (0:00:01.417) 0:00:28.138 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s] ***************************************************
Wednesday 07 January 2026 00:44:50 +0000 (0:00:01.089) 0:00:29.227 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Wednesday 07 January 2026 00:44:51 +0000 (0:00:00.973) 0:00:30.200 *****
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.990) 0:00:31.191 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.287) 0:00:31.478 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Wednesday 07 January 2026 00:44:53 +0000 (0:00:00.837) 0:00:32.315 *****
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_server : Deploy vip manifest] ****************************************
Wednesday 07 January 2026 00:44:54 +0000 (0:00:01.491) 0:00:33.806 *****
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.538) 0:00:34.344 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Create manifests directory on first master] *****************
Wednesday 07 January 2026 00:44:57 +0000 (0:00:01.753) 0:00:36.098 *****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Wednesday 07 January 2026 00:44:57 +0000 (0:00:00.563) 0:00:36.662 *****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]
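The k3s_custom_registries tasks earlier in this play were skipped because no mirrors are configured in this run. When mirrors are set, the file the role inserts follows the documented k3s registries.yaml schema; a minimal sketch (registry host and credentials below are hypothetical, not values from this job):

```yaml
# Hypothetical /etc/rancher/k3s/registries.yaml -- illustrative values only
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"   # assumed mirror endpoint
configs:
  "registry.example.com:5000":
    auth:
      username: pull-user                     # hypothetical credentials
      password: example
```

k3s reads this file at startup and configures its embedded containerd accordingly, which is why the role removes it again when no registries are configured.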
TASK [k3s_server : Copy vip manifest to first master] **************************
Wednesday 07 January 2026 00:44:58 +0000 (0:00:00.977) 0:00:37.639 *****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Wednesday 07 January 2026 00:45:00 +0000 (0:00:01.400) 0:00:39.040 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Wednesday 07 January 2026 00:45:00 +0000 (0:00:00.352) 0:00:39.393 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Wednesday 07 January 2026 00:45:00 +0000 (0:00:00.223) 0:00:39.617 *****
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Wednesday 07 January 2026 00:45:01 +0000 (0:00:01.170) 0:00:40.787 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Wednesday 07 January 2026 00:45:04 +0000 (0:00:02.290) 0:00:43.078 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Wednesday 07 January 2026 00:45:05 +0000 (0:00:01.188) 0:00:44.267 *****
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Wednesday 07 January 2026 00:45:48 +0000 (0:00:43.350) 0:01:27.617 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.386) 0:01:28.003 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Wednesday 07 January 2026 00:45:50 +0000 (0:00:01.085) 0:01:29.089 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Wednesday 07 January 2026 00:45:51 +0000 (0:00:01.154) 0:01:30.243 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Wait for node-token] ****************************************
Wednesday 07 January 2026 00:46:15 +0000 (0:00:23.955) 0:01:54.198 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.670) 0:01:54.869 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.544) 0:01:55.414 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.528) 0:01:55.942 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.719) 0:01:56.661 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.272) 0:01:56.934 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Wednesday 07 January 2026 00:46:18 +0000 (0:00:00.533) 0:01:57.467 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Wednesday 07 January 2026 00:46:19 +0000 (0:00:00.520) 0:01:57.987 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.996) 0:01:58.984 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Wednesday 07 January 2026 00:46:20 +0000 (0:00:00.693) 0:01:59.678 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Wednesday 07 January 2026 00:46:21 +0000 (0:00:00.271) 0:01:59.949 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Wednesday 07 January 2026 00:46:21 +0000 (0:00:00.289) 0:02:00.238 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.774) 0:02:01.012 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Wednesday 07 January 2026 00:46:22 +0000 (0:00:00.580) 0:02:01.593 *****
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Wednesday 07 January 2026 00:46:25 +0000 (0:00:02.596) 0:02:04.189 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Wednesday 07 January 2026 00:46:25 +0000 (0:00:00.451) 0:02:04.640 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Wednesday 07 January 2026 00:46:26 +0000 (0:00:00.556) 0:02:05.197 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Wednesday 07 January 2026 00:46:26 +0000 (0:00:00.297) 0:02:05.494 *****
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Wednesday 07 January 2026 00:46:27 +0000 (0:00:00.579) 0:02:06.074 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Wednesday 07 January 2026 00:46:27 +0000 (0:00:00.275) 0:02:06.349 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Wednesday 07 January 2026 00:46:27 +0000 (0:00:00.286) 0:02:06.636 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Wednesday 07 January 2026 00:46:28 +0000 (0:00:00.276) 0:02:06.913 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Wednesday 07 January 2026 00:46:28 +0000 (0:00:00.764) 0:02:07.677 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Wednesday 07 January 2026 00:46:29 +0000 (0:00:01.046) 0:02:08.724 *****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Manage k3s service] ******************************************
Wednesday 07 January 2026 00:46:31 +0000 (0:00:01.174) 0:02:09.899 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Wednesday 07 January 2026 00:46:41 +0000 (0:00:10.415) 0:02:20.314 *****
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.811) 0:02:21.126 *****
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Wednesday 07 January 2026 00:46:42 +0000 (0:00:00.464) 0:02:21.590 *****
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.505) 0:02:22.096 *****
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.600) 0:02:22.696 *****
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Wednesday 07 January 2026 00:46:44 +0000 (0:00:00.491) 0:02:23.188 *****
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Wednesday 07 January 2026 00:46:45 +0000 (0:00:01.304) 0:02:24.493 *****
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Wednesday 07 January 2026 00:46:46 +0000 (0:00:00.657) 0:02:25.150 *****
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Wednesday 07 January 2026 00:46:46 +0000 (0:00:00.317) 0:02:25.468 *****
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.566) 0:02:26.034 *****
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.098) 0:02:26.133 *****
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.176) 0:02:26.309 *****
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.745) 0:02:27.055 *****
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Wednesday 07 January 2026 00:46:50 +0000 (0:00:02.420) 0:02:29.475 *****
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.745) 0:02:30.220 *****
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.357) 0:02:30.578 *****
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Wednesday 07 January 2026 00:46:58 +0000 (0:00:06.364) 0:02:36.942 *****
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Wednesday 07 January 2026 00:47:11 +0000 (0:00:13.285) 0:02:50.227 *****
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Wednesday 07 January 2026 00:47:11 +0000 (0:00:00.459) 0:02:50.687 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.266) 0:02:50.953 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.454) 0:02:51.408 *****
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.599) 0:02:52.008 *****
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.787) 0:02:52.795 *****
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.920) 0:02:53.715 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.107) 0:02:53.823 *****
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Wednesday 07 January 2026 00:47:15 +0000 (0:00:00.792) 0:02:54.615 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Wednesday 07 January 2026 00:47:15 +0000 (0:00:00.092) 0:02:54.707 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Wednesday 07
January 2026 00:47:15 +0000 (0:00:00.100) 0:02:54.808 ***** 2026-01-07 00:48:29.593280 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.593291 | orchestrator | 2026-01-07 00:48:29.593303 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-07 00:48:29.593315 | orchestrator | Wednesday 07 January 2026 00:47:16 +0000 (0:00:00.121) 0:02:54.930 ***** 2026-01-07 00:48:29.593327 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.593337 | orchestrator | 2026-01-07 00:48:29.593345 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-07 00:48:29.593352 | orchestrator | Wednesday 07 January 2026 00:47:16 +0000 (0:00:00.104) 0:02:55.034 ***** 2026-01-07 00:48:29.593359 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:48:29.593367 | orchestrator | 2026-01-07 00:48:29.593374 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-07 00:48:29.593454 | orchestrator | Wednesday 07 January 2026 00:47:20 +0000 (0:00:04.670) 0:02:59.705 ***** 2026-01-07 00:48:29.593482 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-07 00:48:29.593493 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-07 00:48:29.593505 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-07 00:48:29.593516 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-07 00:48:29.593528 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-07 00:48:29.593539 | orchestrator | 2026-01-07 00:48:29.593550 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-07 00:48:29.593561 | orchestrator | Wednesday 07 January 2026 00:48:03 +0000 (0:00:42.250) 0:03:41.955 ***** 2026-01-07 00:48:29.593583 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:48:29.593590 | orchestrator | 2026-01-07 00:48:29.593597 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-07 00:48:29.593604 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:01.036) 0:03:42.991 ***** 2026-01-07 00:48:29.593611 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:48:29.593617 | orchestrator | 2026-01-07 00:48:29.593624 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-07 00:48:29.593631 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:01.406) 0:03:44.398 ***** 2026-01-07 00:48:29.593638 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:48:29.593645 | orchestrator | 2026-01-07 00:48:29.593651 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-07 00:48:29.593658 | orchestrator | Wednesday 07 January 2026 00:48:06 +0000 (0:00:01.060) 0:03:45.459 ***** 2026-01-07 00:48:29.593665 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.593671 | orchestrator | 2026-01-07 00:48:29.593678 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-07 00:48:29.593685 | orchestrator 
| Wednesday 07 January 2026 00:48:06 +0000 (0:00:00.116) 0:03:45.576 ***** 2026-01-07 00:48:29.593691 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-07 00:48:29.593698 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-07 00:48:29.593712 | orchestrator | 2026-01-07 00:48:29.593719 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-07 00:48:29.593726 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:01.593) 0:03:47.170 ***** 2026-01-07 00:48:29.593732 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.593739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.593746 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.593752 | orchestrator | 2026-01-07 00:48:29.593759 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-07 00:48:29.593816 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:00.281) 0:03:47.451 ***** 2026-01-07 00:48:29.593823 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:48:29.593830 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:48:29.593837 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:48:29.593844 | orchestrator | 2026-01-07 00:48:29.593850 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-07 00:48:29.593857 | orchestrator | 2026-01-07 00:48:29.593864 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-07 00:48:29.593871 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.973) 0:03:48.424 ***** 2026-01-07 00:48:29.593877 | orchestrator | ok: [testbed-manager] 2026-01-07 00:48:29.593884 | orchestrator | 2026-01-07 00:48:29.593890 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-07 00:48:29.593897 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.117) 0:03:48.542 ***** 2026-01-07 00:48:29.593924 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:48:29.593934 | orchestrator | 2026-01-07 00:48:29.593945 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-07 00:48:29.593956 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.213) 0:03:48.755 ***** 2026-01-07 00:48:29.593968 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:29.593979 | orchestrator | 2026-01-07 00:48:29.593990 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-07 00:48:29.594000 | orchestrator | 2026-01-07 00:48:29.594265 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-07 00:48:29.594293 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:05.501) 0:03:54.257 ***** 2026-01-07 00:48:29.594305 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:48:29.594316 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:48:29.594381 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:48:29.594394 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:48:29.594405 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:48:29.594417 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:48:29.594430 | orchestrator | 2026-01-07 00:48:29.594441 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-07 00:48:29.594453 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:00.577) 0:03:54.835 ***** 2026-01-07 00:48:29.594465 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:48:29.594486 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-01-07 00:48:29.594497 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:48:29.594509 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:48:29.594520 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:48:29.594533 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:48:29.594544 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:48:29.594557 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:48:29.594580 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:48:29.594593 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:48:29.594605 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:48:29.594616 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:48:29.594640 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:48:29.594651 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:48:29.594662 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:48:29.594673 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:48:29.594684 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:48:29.594695 | orchestrator | ok: [testbed-node-5 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:48:29.594706 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:48:29.594717 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:48:29.594766 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:48:29.594780 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:48:29.594791 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:48:29.594803 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:48:29.594815 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:48:29.594826 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:48:29.594837 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:48:29.594848 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:48:29.594858 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:48:29.594865 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:48:29.594872 | orchestrator | 2026-01-07 00:48:29.594879 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-07 00:48:29.594887 | orchestrator | Wednesday 07 January 2026 00:48:26 +0000 (0:00:10.160) 0:04:04.995 ***** 2026-01-07 00:48:29.594894 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.594966 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.594980 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.594989 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.595000 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.595012 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.595023 | orchestrator | 2026-01-07 00:48:29.595036 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-07 00:48:29.595049 | orchestrator | Wednesday 07 January 2026 00:48:26 +0000 (0:00:00.503) 0:04:05.499 ***** 2026-01-07 00:48:29.595061 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:48:29.595075 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:48:29.595087 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:48:29.595099 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:48:29.595110 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:48:29.595118 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:48:29.595125 | orchestrator | 2026-01-07 00:48:29.595133 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:48:29.595143 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:29.595169 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-07 00:48:29.595185 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:48:29.595197 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:48:29.595217 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:48:29.595229 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:48:29.595240 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:48:29.595251 | orchestrator | 2026-01-07 00:48:29.595263 | orchestrator | 2026-01-07 00:48:29.595274 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:48:29.595285 | orchestrator | Wednesday 07 January 2026 00:48:26 +0000 (0:00:00.334) 0:04:05.834 ***** 2026-01-07 00:48:29.595297 | orchestrator | =============================================================================== 2026-01-07 00:48:29.595308 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.35s 2026-01-07 00:48:29.595318 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.25s 2026-01-07 00:48:29.595329 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.96s 2026-01-07 00:48:29.595353 | orchestrator | kubectl : Install required packages ------------------------------------ 13.29s 2026-01-07 00:48:29.595367 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.42s 2026-01-07 00:48:29.595380 | orchestrator | Manage labels ---------------------------------------------------------- 10.16s 2026-01-07 00:48:29.595394 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.36s 2026-01-07 00:48:29.595407 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.31s 2026-01-07 00:48:29.595420 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.50s 2026-01-07 00:48:29.595434 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.67s 2026-01-07 00:48:29.595447 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.60s 2026-01-07 
00:48:29.595459 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.42s 2026-01-07 00:48:29.595472 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.41s 2026-01-07 00:48:29.595485 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.29s 2026-01-07 00:48:29.595497 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.87s 2026-01-07 00:48:29.595510 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.76s 2026-01-07 00:48:29.595522 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.75s 2026-01-07 00:48:29.595534 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.69s 2026-01-07 00:48:29.595547 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.59s 2026-01-07 00:48:29.595559 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.49s 2026-01-07 00:48:29.595570 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task 2c3da700-2fac-467e-8970-371bf821a588 is in state STARTED 2026-01-07 00:48:29.595595 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:48:29.595607 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:48:29.595616 | orchestrator | 2026-01-07 00:48:29 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:48:29.595630 | orchestrator | 2026-01-07 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:32.777260 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:48:32.777548 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task 
db69ed5e-722c-49e3-ad48-7ccf0471ab4a is in state STARTED 2026-01-07 00:48:32.778336 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task 2c3da700-2fac-467e-8970-371bf821a588 is in state STARTED 2026-01-07 00:48:32.779337 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:48:32.780067 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:48:32.780859 | orchestrator | 2026-01-07 00:48:32 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:48:32.780970 | orchestrator | 2026-01-07 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:35.803200 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:48:35.803309 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task db69ed5e-722c-49e3-ad48-7ccf0471ab4a is in state SUCCESS 2026-01-07 00:48:35.803749 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 2c3da700-2fac-467e-8970-371bf821a588 is in state STARTED 2026-01-07 00:48:35.806311 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:48:35.806670 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:48:35.807442 | orchestrator | 2026-01-07 00:48:35 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:48:35.807494 | orchestrator | 2026-01-07 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:38.846797 | orchestrator | 2026-01-07 00:48:38 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:48:38.847536 | orchestrator | 2026-01-07 00:48:38 | INFO  | Task 2c3da700-2fac-467e-8970-371bf821a588 is in state SUCCESS 2026-01-07 00:48:38.849196 | orchestrator | 2026-01-07 00:48:38 | INFO  | Task 
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:48:38.850682 | orchestrator | 2026-01-07 00:48:38 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:48:38.852440 | orchestrator | 2026-01-07 00:48:38 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:48:38.852516 | orchestrator | 2026-01-07 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:42.654506 | orchestrator | 2026-01-07 00:49:42 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:42.654557 | orchestrator | 2026-01-07 00:49:42 | INFO  | Task
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:42.654563 | orchestrator | 2026-01-07 00:49:42 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:42.654568 | orchestrator | 2026-01-07 00:49:42 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:42.654572 | orchestrator | 2026-01-07 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:45.674490 | orchestrator | 2026-01-07 00:49:45 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:45.677527 | orchestrator | 2026-01-07 00:49:45 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:45.680593 | orchestrator | 2026-01-07 00:49:45 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:45.682507 | orchestrator | 2026-01-07 00:49:45 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:45.682714 | orchestrator | 2026-01-07 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:48.723333 | orchestrator | 2026-01-07 00:49:48 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:48.725121 | orchestrator | 2026-01-07 00:49:48 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:48.727083 | orchestrator | 2026-01-07 00:49:48 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:48.729326 | orchestrator | 2026-01-07 00:49:48 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:48.729406 | orchestrator | 2026-01-07 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:51.768862 | orchestrator | 2026-01-07 00:49:51 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:51.771365 | orchestrator | 2026-01-07 00:49:51 | INFO  | Task 
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:51.773122 | orchestrator | 2026-01-07 00:49:51 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:51.776940 | orchestrator | 2026-01-07 00:49:51 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:51.777005 | orchestrator | 2026-01-07 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:54.819624 | orchestrator | 2026-01-07 00:49:54 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:54.820579 | orchestrator | 2026-01-07 00:49:54 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:54.821237 | orchestrator | 2026-01-07 00:49:54 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:54.822948 | orchestrator | 2026-01-07 00:49:54 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:54.822991 | orchestrator | 2026-01-07 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:57.852619 | orchestrator | 2026-01-07 00:49:57 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:49:57.853024 | orchestrator | 2026-01-07 00:49:57 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:49:57.855229 | orchestrator | 2026-01-07 00:49:57 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:49:57.855771 | orchestrator | 2026-01-07 00:49:57 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:49:57.855821 | orchestrator | 2026-01-07 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:00.881075 | orchestrator | 2026-01-07 00:50:00 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:00.881308 | orchestrator | 2026-01-07 00:50:00 | INFO  | Task 
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:00.881889 | orchestrator | 2026-01-07 00:50:00 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:00.882904 | orchestrator | 2026-01-07 00:50:00 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:50:00.882937 | orchestrator | 2026-01-07 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:03.913471 | orchestrator | 2026-01-07 00:50:03 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:03.913835 | orchestrator | 2026-01-07 00:50:03 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:03.914861 | orchestrator | 2026-01-07 00:50:03 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:03.915631 | orchestrator | 2026-01-07 00:50:03 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:50:03.915670 | orchestrator | 2026-01-07 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:06.953670 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:06.955453 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:06.957540 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:06.958744 | orchestrator | 2026-01-07 00:50:06 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:50:06.958946 | orchestrator | 2026-01-07 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:09.993431 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:09.993730 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:09.994581 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:09.996597 | orchestrator | 2026-01-07 00:50:09 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:50:09.996668 | orchestrator | 2026-01-07 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:13.066141 | orchestrator | 2026-01-07 00:50:13 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:13.069533 | orchestrator | 2026-01-07 00:50:13 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:13.079666 | orchestrator | 2026-01-07 00:50:13 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:13.083864 | orchestrator | 2026-01-07 00:50:13 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state STARTED 2026-01-07 00:50:13.083982 | orchestrator | 2026-01-07 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:16.120602 | orchestrator | 2026-01-07 00:50:16 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:16.121252 | orchestrator | 2026-01-07 00:50:16 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:16.121856 | orchestrator | 2026-01-07 00:50:16 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:16.123235 | orchestrator | 2026-01-07 00:50:16 | INFO  | Task 1a94d345-6824-4bba-8ed5-9a773412a5f4 is in state SUCCESS 2026-01-07 00:50:16.123295 | orchestrator | 2026-01-07 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:16.125076 | orchestrator | 2026-01-07 00:50:16.125137 | orchestrator | 2026-01-07 00:50:16.125159 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-07 00:50:16.125178 | 
orchestrator | 2026-01-07 00:50:16.125197 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:50:16.125216 | orchestrator | Wednesday 07 January 2026 00:48:30 +0000 (0:00:00.112) 0:00:00.112 ***** 2026-01-07 00:50:16.125274 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:50:16.125351 | orchestrator | 2026-01-07 00:50:16.125370 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:50:16.125388 | orchestrator | Wednesday 07 January 2026 00:48:31 +0000 (0:00:00.727) 0:00:00.839 ***** 2026-01-07 00:50:16.125406 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:16.125424 | orchestrator | 2026-01-07 00:50:16.125442 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-07 00:50:16.125461 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:01.039) 0:00:01.879 ***** 2026-01-07 00:50:16.125479 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:16.125497 | orchestrator | 2026-01-07 00:50:16.125515 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:50:16.125554 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:16.125575 | orchestrator | 2026-01-07 00:50:16.125593 | orchestrator | 2026-01-07 00:50:16.125611 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:50:16.125629 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:00.413) 0:00:02.293 ***** 2026-01-07 00:50:16.125649 | orchestrator | =============================================================================== 2026-01-07 00:50:16.125667 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2026-01-07 00:50:16.125685 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.73s 2026-01-07 00:50:16.125720 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s 2026-01-07 00:50:16.125853 | orchestrator | 2026-01-07 00:50:16.125881 | orchestrator | 2026-01-07 00:50:16.125903 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-07 00:50:16.125922 | orchestrator | 2026-01-07 00:50:16.125941 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-07 00:50:16.125960 | orchestrator | Wednesday 07 January 2026 00:48:31 +0000 (0:00:00.151) 0:00:00.151 ***** 2026-01-07 00:50:16.125977 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:16.125996 | orchestrator | 2026-01-07 00:50:16.126015 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-07 00:50:16.126091 | orchestrator | Wednesday 07 January 2026 00:48:31 +0000 (0:00:00.681) 0:00:00.832 ***** 2026-01-07 00:50:16.126110 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:16.126126 | orchestrator | 2026-01-07 00:50:16.126146 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:50:16.126181 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:00.454) 0:00:01.287 ***** 2026-01-07 00:50:16.126201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:50:16.126219 | orchestrator | 2026-01-07 00:50:16.126237 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:50:16.126253 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:00.674) 0:00:01.961 ***** 2026-01-07 00:50:16.126269 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:16.126287 | orchestrator | 2026-01-07 00:50:16.126304 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2026-01-07 00:50:16.126321 | orchestrator | Wednesday 07 January 2026 00:48:34 +0000 (0:00:01.192) 0:00:03.154 ***** 2026-01-07 00:50:16.126338 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:16.126356 | orchestrator | 2026-01-07 00:50:16.126374 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-07 00:50:16.126392 | orchestrator | Wednesday 07 January 2026 00:48:34 +0000 (0:00:00.481) 0:00:03.635 ***** 2026-01-07 00:50:16.126409 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:50:16.126426 | orchestrator | 2026-01-07 00:50:16.126443 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-07 00:50:16.126460 | orchestrator | Wednesday 07 January 2026 00:48:35 +0000 (0:00:01.407) 0:00:05.042 ***** 2026-01-07 00:50:16.126478 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:50:16.126493 | orchestrator | 2026-01-07 00:50:16.126507 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-07 00:50:16.126523 | orchestrator | Wednesday 07 January 2026 00:48:36 +0000 (0:00:00.704) 0:00:05.747 ***** 2026-01-07 00:50:16.126539 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:16.126555 | orchestrator | 2026-01-07 00:50:16.126572 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-07 00:50:16.126589 | orchestrator | Wednesday 07 January 2026 00:48:37 +0000 (0:00:00.406) 0:00:06.153 ***** 2026-01-07 00:50:16.126605 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:16.126622 | orchestrator | 2026-01-07 00:50:16.126639 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:50:16.126671 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:16.126688 | 
orchestrator | 2026-01-07 00:50:16.126705 | orchestrator | 2026-01-07 00:50:16.126723 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:50:16.126739 | orchestrator | Wednesday 07 January 2026 00:48:37 +0000 (0:00:00.297) 0:00:06.451 ***** 2026-01-07 00:50:16.126755 | orchestrator | =============================================================================== 2026-01-07 00:50:16.126845 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.41s 2026-01-07 00:50:16.126866 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s 2026-01-07 00:50:16.126883 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.70s 2026-01-07 00:50:16.126946 | orchestrator | Get home directory of operator user ------------------------------------- 0.68s 2026-01-07 00:50:16.126965 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2026-01-07 00:50:16.126983 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s 2026-01-07 00:50:16.126996 | orchestrator | Create .kube directory -------------------------------------------------- 0.45s 2026-01-07 00:50:16.127013 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2026-01-07 00:50:16.127030 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s 2026-01-07 00:50:16.127048 | orchestrator | 2026-01-07 00:50:16.127064 | orchestrator | 2026-01-07 00:50:16.127080 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-07 00:50:16.127097 | orchestrator | 2026-01-07 00:50:16.127113 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 00:50:16.127129 | orchestrator | Wednesday 07 January 2026 
00:47:04 +0000 (0:00:00.109) 0:00:00.109 ***** 2026-01-07 00:50:16.127145 | orchestrator | ok: [localhost] => { 2026-01-07 00:50:16.127173 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-07 00:50:16.127191 | orchestrator | } 2026-01-07 00:50:16.127208 | orchestrator | 2026-01-07 00:50:16.127225 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-07 00:50:16.127241 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.030) 0:00:00.139 ***** 2026-01-07 00:50:16.127259 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-07 00:50:16.127277 | orchestrator | ...ignoring 2026-01-07 00:50:16.127294 | orchestrator | 2026-01-07 00:50:16.127311 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-07 00:50:16.127347 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:03.021) 0:00:03.161 ***** 2026-01-07 00:50:16.127364 | orchestrator | skipping: [localhost] 2026-01-07 00:50:16.127380 | orchestrator | 2026-01-07 00:50:16.127396 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-07 00:50:16.127413 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.101) 0:00:03.263 ***** 2026-01-07 00:50:16.127430 | orchestrator | ok: [localhost] 2026-01-07 00:50:16.127446 | orchestrator | 2026-01-07 00:50:16.127462 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:50:16.127478 | orchestrator | 2026-01-07 00:50:16.127494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:50:16.127510 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.232) 
0:00:03.495 ***** 2026-01-07 00:50:16.127526 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:50:16.127544 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:50:16.127561 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:50:16.127579 | orchestrator | 2026-01-07 00:50:16.127595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:50:16.127612 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.401) 0:00:03.896 ***** 2026-01-07 00:50:16.127628 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-07 00:50:16.127646 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-07 00:50:16.127663 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-07 00:50:16.127679 | orchestrator | 2026-01-07 00:50:16.127696 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-07 00:50:16.127712 | orchestrator | 2026-01-07 00:50:16.127728 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:50:16.127746 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.645) 0:00:04.542 ***** 2026-01-07 00:50:16.127763 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:50:16.127826 | orchestrator | 2026-01-07 00:50:16.127846 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:50:16.127864 | orchestrator | Wednesday 07 January 2026 00:47:09 +0000 (0:00:00.581) 0:00:05.123 ***** 2026-01-07 00:50:16.127881 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:50:16.127897 | orchestrator | 2026-01-07 00:50:16.127913 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-07 00:50:16.127930 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 
(0:00:01.644) 0:00:06.767 ***** 2026-01-07 00:50:16.127947 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.127964 | orchestrator | 2026-01-07 00:50:16.127981 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-07 00:50:16.127998 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:00.507) 0:00:07.275 ***** 2026-01-07 00:50:16.128015 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.128031 | orchestrator | 2026-01-07 00:50:16.128047 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-07 00:50:16.128063 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:00.306) 0:00:07.582 ***** 2026-01-07 00:50:16.128079 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.128095 | orchestrator | 2026-01-07 00:50:16.128110 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-07 00:50:16.128126 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:00.212) 0:00:07.794 ***** 2026-01-07 00:50:16.128141 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.128157 | orchestrator | 2026-01-07 00:50:16.128173 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:50:16.128189 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.466) 0:00:08.261 ***** 2026-01-07 00:50:16.128207 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:50:16.128223 | orchestrator | 2026-01-07 00:50:16.128239 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:50:16.128273 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.493) 0:00:08.754 ***** 2026-01-07 00:50:16.128291 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:50:16.128306 | 
orchestrator | 2026-01-07 00:50:16.128322 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-07 00:50:16.128336 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.715) 0:00:09.470 ***** 2026-01-07 00:50:16.128352 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.128367 | orchestrator | 2026-01-07 00:50:16.128382 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-07 00:50:16.128398 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.381) 0:00:09.851 ***** 2026-01-07 00:50:16.128414 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.128428 | orchestrator | 2026-01-07 00:50:16.128443 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-07 00:50:16.128459 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.414) 0:00:10.266 ***** 2026-01-07 00:50:16.128492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128581 | orchestrator | 2026-01-07 00:50:16.128597 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-07 00:50:16.128613 | orchestrator | Wednesday 07 January 2026 00:47:15 +0000 (0:00:00.914) 0:00:11.180 ***** 2026-01-07 00:50:16.128664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.128732 | orchestrator | 2026-01-07 00:50:16.128749 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 
2026-01-07 00:50:16.128765 | orchestrator | Wednesday 07 January 2026 00:47:16 +0000 (0:00:01.394) 0:00:12.574 ***** 2026-01-07 00:50:16.128812 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:16.128829 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:16.128844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:16.128860 | orchestrator | 2026-01-07 00:50:16.128877 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-07 00:50:16.128894 | orchestrator | Wednesday 07 January 2026 00:47:18 +0000 (0:00:01.625) 0:00:14.200 ***** 2026-01-07 00:50:16.128909 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:50:16.128926 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:50:16.128941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:50:16.128956 | orchestrator | 2026-01-07 00:50:16.128973 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-07 00:50:16.128998 | orchestrator | Wednesday 07 January 2026 00:47:20 +0000 (0:00:02.284) 0:00:16.484 ***** 2026-01-07 00:50:16.129015 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:50:16.129032 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:50:16.129048 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:50:16.129064 | orchestrator | 2026-01-07 00:50:16.129081 | orchestrator | TASK [rabbitmq : Copying over 
advanced.config] ********************************* 2026-01-07 00:50:16.129098 | orchestrator | Wednesday 07 January 2026 00:47:22 +0000 (0:00:02.273) 0:00:18.758 ***** 2026-01-07 00:50:16.129116 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:50:16.129132 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:50:16.129157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:50:16.129173 | orchestrator | 2026-01-07 00:50:16.129197 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-07 00:50:16.129214 | orchestrator | Wednesday 07 January 2026 00:47:25 +0000 (0:00:02.669) 0:00:21.427 ***** 2026-01-07 00:50:16.129231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:50:16.129248 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:50:16.129265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:50:16.129281 | orchestrator | 2026-01-07 00:50:16.129298 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-07 00:50:16.129315 | orchestrator | Wednesday 07 January 2026 00:47:26 +0000 (0:00:01.284) 0:00:22.711 ***** 2026-01-07 00:50:16.129332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:50:16.129348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:50:16.129364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:50:16.129379 | orchestrator | 2026-01-07 
00:50:16.129396 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:50:16.129413 | orchestrator | Wednesday 07 January 2026 00:47:27 +0000 (0:00:01.358) 0:00:24.070 ***** 2026-01-07 00:50:16.129429 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:50:16.129445 | orchestrator | 2026-01-07 00:50:16.129461 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-07 00:50:16.129478 | orchestrator | Wednesday 07 January 2026 00:47:28 +0000 (0:00:00.530) 0:00:24.600 ***** 2026-01-07 00:50:16.129497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.129528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.129565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.129584 | orchestrator | 2026-01-07 
00:50:16.129601 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-07 00:50:16.129617 | orchestrator | Wednesday 07 January 2026 00:47:29 +0000 (0:00:01.285) 0:00:25.886 ***** 2026-01-07 00:50:16.129634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129652 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.129670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129687 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:50:16.129715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129742 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:50:16.129758 | orchestrator | 2026-01-07 00:50:16.129809 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-07 00:50:16.129828 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.283) 
0:00:26.170 ***** 2026-01-07 00:50:16.129845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129861 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.129917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129937 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:50:16.129955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.129984 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:50:16.130001 | orchestrator | 2026-01-07 00:50:16.130058 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-07 00:50:16.130090 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.647) 0:00:26.818 ***** 2026-01-07 00:50:16.130115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.130135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 
00:50:16.130156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:50:16.130174 | orchestrator | 2026-01-07 00:50:16.130190 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-07 00:50:16.130218 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:01.435) 0:00:28.253 ***** 2026-01-07 00:50:16.130233 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:50:16.130250 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:50:16.130267 | orchestrator | } 2026-01-07 00:50:16.130283 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:50:16.130299 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:50:16.130315 | orchestrator | } 2026-01-07 00:50:16.130332 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:50:16.130347 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:50:16.130364 | orchestrator | } 2026-01-07 00:50:16.130379 | orchestrator | 
2026-01-07 00:50:16.130395 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:50:16.130410 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:00.302) 0:00:28.555 ***** 2026-01-07 00:50:16.130439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.130466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.130486 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.130502 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:50:16.130519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:50:16.130548 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:50:16.130565 | orchestrator | 2026-01-07 00:50:16.130581 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-07 00:50:16.130598 | orchestrator | Wednesday 07 January 
2026 00:47:33 +0000 (0:00:01.181) 0:00:29.737 ***** 2026-01-07 00:50:16.130614 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:16.130631 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:16.130646 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:16.130662 | orchestrator | 2026-01-07 00:50:16.130677 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-07 00:50:16.130692 | orchestrator | Wednesday 07 January 2026 00:47:34 +0000 (0:00:01.015) 0:00:30.752 ***** 2026-01-07 00:50:16.130708 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:16.130723 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:16.130739 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:16.130755 | orchestrator | 2026-01-07 00:50:16.130770 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-07 00:50:16.130810 | orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:08.876) 0:00:39.628 ***** 2026-01-07 00:50:16.130826 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:16.130843 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:16.130857 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:16.130873 | orchestrator | 2026-01-07 00:50:16.130889 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:50:16.130905 | orchestrator | 2026-01-07 00:50:16.130920 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:50:16.130947 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:00.490) 0:00:40.119 ***** 2026-01-07 00:50:16.130964 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:50:16.130982 | orchestrator | 2026-01-07 00:50:16.130997 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:50:16.131014 | orchestrator | Wednesday 07 
January 2026 00:47:44 +0000 (0:00:00.767) 0:00:40.887 ***** 2026-01-07 00:50:16.131031 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:50:16.131047 | orchestrator | 2026-01-07 00:50:16.131061 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:50:16.131078 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.282) 0:00:41.170 ***** 2026-01-07 00:50:16.131093 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:16.131109 | orchestrator | 2026-01-07 00:50:16.131126 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:50:16.131142 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:02.471) 0:00:43.641 ***** 2026-01-07 00:50:16.131158 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:16.131174 | orchestrator | 2026-01-07 00:50:16.131191 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:50:16.131206 | orchestrator | 2026-01-07 00:50:16.131220 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:50:16.131245 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:01:52.965) 0:02:36.607 ***** 2026-01-07 00:50:16.131258 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:50:16.131272 | orchestrator | 2026-01-07 00:50:16.131285 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:50:16.131297 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.649) 0:02:37.257 ***** 2026-01-07 00:50:16.131310 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:50:16.131323 | orchestrator | 2026-01-07 00:50:16.131336 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:50:16.131349 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:00.190) 0:02:37.447 
***** 2026-01-07 00:50:16.131379 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:16.131392 | orchestrator | 2026-01-07 00:50:16.131405 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:50:16.131417 | orchestrator | Wednesday 07 January 2026 00:49:47 +0000 (0:00:06.545) 0:02:43.993 ***** 2026-01-07 00:50:16.131431 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:16.131440 | orchestrator | 2026-01-07 00:50:16.131448 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:50:16.131455 | orchestrator | 2026-01-07 00:50:16.131463 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:50:16.131471 | orchestrator | Wednesday 07 January 2026 00:49:56 +0000 (0:00:08.872) 0:02:52.865 ***** 2026-01-07 00:50:16.131479 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:50:16.131487 | orchestrator | 2026-01-07 00:50:16.131495 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:50:16.131502 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.633) 0:02:53.498 ***** 2026-01-07 00:50:16.131510 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:50:16.131518 | orchestrator | 2026-01-07 00:50:16.131526 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:50:16.131533 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.113) 0:02:53.611 ***** 2026-01-07 00:50:16.131541 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:16.131549 | orchestrator | 2026-01-07 00:50:16.131557 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:50:16.131565 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:01.351) 0:02:54.962 ***** 2026-01-07 00:50:16.131572 | orchestrator | 
changed: [testbed-node-2] 2026-01-07 00:50:16.131580 | orchestrator | 2026-01-07 00:50:16.131588 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-07 00:50:16.131595 | orchestrator | 2026-01-07 00:50:16.131604 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-07 00:50:16.131617 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:10.154) 0:03:05.117 ***** 2026-01-07 00:50:16.131630 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:50:16.131643 | orchestrator | 2026-01-07 00:50:16.131651 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-07 00:50:16.131659 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.866) 0:03:05.984 ***** 2026-01-07 00:50:16.131667 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:50:16.131674 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:50:16.131682 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:50:16.131694 | orchestrator | 2026-01-07 00:50:16.131708 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:50:16.131719 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-07 00:50:16.131729 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-07 00:50:16.131737 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-07 00:50:16.131745 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-07 00:50:16.131753 | orchestrator | 2026-01-07 00:50:16.131761 | orchestrator | 2026-01-07 00:50:16.131769 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 
00:50:16.131970 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:03.355) 0:03:09.339 ***** 2026-01-07 00:50:16.131987 | orchestrator | =============================================================================== 2026-01-07 00:50:16.131995 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 131.99s 2026-01-07 00:50:16.132026 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.37s 2026-01-07 00:50:16.132035 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.88s 2026-01-07 00:50:16.132043 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.36s 2026-01-07 00:50:16.132050 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.02s 2026-01-07 00:50:16.132059 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.67s 2026-01-07 00:50:16.132066 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.28s 2026-01-07 00:50:16.132073 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.27s 2026-01-07 00:50:16.132079 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.05s 2026-01-07 00:50:16.132086 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.64s 2026-01-07 00:50:16.132092 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.63s 2026-01-07 00:50:16.132099 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.44s 2026-01-07 00:50:16.132124 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.39s 2026-01-07 00:50:16.132132 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.36s 2026-01-07 00:50:16.132139 | 
orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.29s 2026-01-07 00:50:16.132146 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.28s 2026-01-07 00:50:16.132152 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.18s 2026-01-07 00:50:16.132166 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2026-01-07 00:50:16.132174 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.91s 2026-01-07 00:50:16.132180 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.87s 2026-01-07 00:50:19.160344 | orchestrator | 2026-01-07 00:50:19 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:19.163455 | orchestrator | 2026-01-07 00:50:19 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:19.164893 | orchestrator | 2026-01-07 00:50:19 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:19.165228 | orchestrator | 2026-01-07 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:22.201054 | orchestrator | 2026-01-07 00:50:22 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:22.203488 | orchestrator | 2026-01-07 00:50:22 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:22.204795 | orchestrator | 2026-01-07 00:50:22 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:22.204840 | orchestrator | 2026-01-07 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:25.228202 | orchestrator | 2026-01-07 00:50:25 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:50:25.228901 | orchestrator | 2026-01-07 00:50:25 | INFO  | Task 
2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:50:25.229368 | orchestrator | 2026-01-07 00:50:25 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:50:25.229439 | orchestrator | 2026-01-07 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:10.804081 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:51:10.805002 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:51:10.806578 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state
STARTED 2026-01-07 00:51:10.806641 | orchestrator | 2026-01-07 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:13.838222 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:51:13.838715 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:51:13.839575 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state STARTED 2026-01-07 00:51:13.839595 | orchestrator | 2026-01-07 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:16.890139 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:51:16.891284 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED 2026-01-07 00:51:16.894559 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 2296d581-997e-40da-a7ec-b7daaec5f0dd is in state SUCCESS 2026-01-07 00:51:16.896399 | orchestrator | 2026-01-07 00:51:16.896481 | orchestrator | 2026-01-07 00:51:16.896486 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:51:16.896490 | orchestrator | 2026-01-07 00:51:16.896494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:51:16.896497 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:00.142) 0:00:00.142 ***** 2026-01-07 00:51:16.896501 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.896504 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.896508 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.896511 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:51:16.896514 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:51:16.896517 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:51:16.896520 | orchestrator | 2026-01-07 
00:51:16.896523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:51:16.896527 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:00.463) 0:00:00.605 ***** 2026-01-07 00:51:16.896530 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-07 00:51:16.896533 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-07 00:51:16.896536 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-07 00:51:16.896540 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-07 00:51:16.896543 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-07 00:51:16.896546 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-07 00:51:16.896549 | orchestrator | 2026-01-07 00:51:16.896552 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-07 00:51:16.896555 | orchestrator | 2026-01-07 00:51:16.896558 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-07 00:51:16.896561 | orchestrator | Wednesday 07 January 2026 00:47:59 +0000 (0:00:00.698) 0:00:01.304 ***** 2026-01-07 00:51:16.896588 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:51:16.896592 | orchestrator | 2026-01-07 00:51:16.896595 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-07 00:51:16.896598 | orchestrator | Wednesday 07 January 2026 00:48:00 +0000 (0:00:00.948) 0:00:02.252 ***** 2026-01-07 00:51:16.896620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896674 | orchestrator | 2026-01-07 00:51:16.896685 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-07 00:51:16.896688 | orchestrator | Wednesday 07 January 2026 00:48:01 +0000 (0:00:01.246) 0:00:03.499 ***** 2026-01-07 00:51:16.896692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896716 | orchestrator | 2026-01-07 00:51:16.896719 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-07 00:51:16.896723 | orchestrator | Wednesday 07 January 2026 00:48:03 +0000 (0:00:01.961) 0:00:05.461 ***** 2026-01-07 00:51:16.896726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896758 | orchestrator | 2026-01-07 00:51:16.896764 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-07 00:51:16.896767 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:00.899) 0:00:06.360 ***** 2026-01-07 00:51:16.896770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896791 | orchestrator | 2026-01-07 00:51:16.896796 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-01-07 00:51:16.896799 | orchestrator | Wednesday 07 January 2026 00:48:06 +0000 (0:00:01.758) 0:00:08.118 ***** 
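Each `(item={'key': 'ovn-controller', 'value': {...}})` entry above is one element of the service map that the role iterates over when ensuring config directories exist and copying config files. A minimal sketch of that iteration, using the dict shape copied from the log; the loop body is an assumption for illustration, not the actual kolla-ansible role code:

```python
# Sketch of the per-service loop behind the "Ensuring config directories
# exist" / "Copying over config.json" tasks. The dict mirrors the
# (item={'key': ..., 'value': ...}) entries in the log output above.
services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-controller:2025.1",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def config_dirs(services: dict) -> list:
    """Directories an 'Ensuring config directories exist' style task
    would create for every enabled service (hypothetical helper)."""
    return [f"/etc/kolla/{name}" for name, svc in services.items() if svc["enabled"]]

print(config_dirs(services))  # ['/etc/kolla/ovn-controller']
```

Looping over one dict keyed by service name is why every task in the play reports the same `item=` payload per host: each host applies the identical service definition.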
2026-01-07 00:51:16.896802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.896878 | orchestrator | 2026-01-07 00:51:16.896881 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-01-07 00:51:16.896885 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:01.385) 0:00:09.504 ***** 2026-01-07 00:51:16.896888 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:51:16.896892 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.896895 | orchestrator | } 2026-01-07 00:51:16.896899 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:51:16.896902 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.896905 | orchestrator | } 2026-01-07 00:51:16.896908 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:51:16.896911 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.896914 | orchestrator | } 2026-01-07 00:51:16.896917 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:51:16.896920 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.896924 | orchestrator | } 2026-01-07 00:51:16.896927 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 00:51:16.896930 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 
00:51:16.896933 | orchestrator | } 2026-01-07 00:51:16.896936 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 00:51:16.896939 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.896942 | orchestrator | } 2026-01-07 00:51:16.896945 | orchestrator | 2026-01-07 00:51:16.896948 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:51:16.896952 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:00.577) 0:00:10.082 ***** 2026-01-07 00:51:16.896955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896967 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.896971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896974 
| orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.896977 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.896980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896986 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:16.896990 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:16.896993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.896996 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:16.896999 | orchestrator | 2026-01-07 00:51:16.897002 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-07 00:51:16.897005 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.971) 0:00:11.053 ***** 
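The "Create br-int bridge on OpenvSwitch" task boils down to an idempotent `ovs-vsctl` call. A sketch of how that command line might be assembled (command construction only, nothing is executed here; the exact flags used by the role are an assumption):

```python
# Build the idempotent ovs-vsctl command a "create br-int" task
# typically wraps. --may-exist makes repeated runs a no-op, which is
# why the task can safely execute on every deploy.
def add_bridge_cmd(bridge: str) -> list:
    return ["ovs-vsctl", "--may-exist", "add-br", bridge]

print(" ".join(add_bridge_cmd("br-int")))  # ovs-vsctl --may-exist add-br br-int
```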
2026-01-07 00:51:16.897009 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.897013 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.897016 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:51:16.897021 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.897025 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:51:16.897029 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:51:16.897033 | orchestrator | 2026-01-07 00:51:16.897036 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-07 00:51:16.897040 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:02.782) 0:00:13.836 ***** 2026-01-07 00:51:16.897044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-07 00:51:16.897047 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-07 00:51:16.897051 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-07 00:51:16.897055 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-07 00:51:16.897058 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-07 00:51:16.897062 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-07 00:51:16.897068 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:51:16.897072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:51:16.897075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:51:16.897079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 
2026-01-07 00:51:16.897083 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:51:16.897086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:51:16.897092 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897100 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897104 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897111 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-07 00:51:16.897115 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897119 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897130 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897133 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:51:16.897137 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897279 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897282 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897286 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:51:16.897290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897293 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897297 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897301 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897304 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897337 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:51:16.897343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:51:16.897353 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:51:16.897357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:51:16.897360 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:51:16.897364 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:51:16.897367 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:51:16.897371 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-07 00:51:16.897375 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-07 00:51:16.897379 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-07 00:51:16.897388 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-07 00:51:16.897391 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-07 00:51:16.897397 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-07 00:51:16.897401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:51:16.897405 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:51:16.897408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:51:16.897412 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:51:16.897417 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:51:16.897421 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:51:16.897427 | orchestrator | 2026-01-07 00:51:16.897448 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:51:16.897454 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:20.999) 0:00:34.836 ***** 2026-01-07 00:51:16.897459 | orchestrator | 2026-01-07 00:51:16.897464 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:51:16.897479 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:00.067) 0:00:34.904 ***** 2026-01-07 00:51:16.897484 | orchestrator | 2026-01-07 00:51:16.897489 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:51:16.897494 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.068) 0:00:34.972 ***** 2026-01-07 00:51:16.897849 | orchestrator | 2026-01-07 00:51:16.897872 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:51:16.897888 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.064) 0:00:35.036 ***** 2026-01-07 00:51:16.897893 | orchestrator | 2026-01-07 00:51:16.897909 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
2026-01-07 00:51:16.898206 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.067) 0:00:35.103 ***** 2026-01-07 00:51:16.898217 | orchestrator | 2026-01-07 00:51:16.898222 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:51:16.898227 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.059) 0:00:35.163 ***** 2026-01-07 00:51:16.898231 | orchestrator | 2026-01-07 00:51:16.898236 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-07 00:51:16.898241 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.075) 0:00:35.239 ***** 2026-01-07 00:51:16.898246 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:51:16.898252 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898257 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898262 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:51:16.898267 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898272 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:51:16.898277 | orchestrator | 2026-01-07 00:51:16.898282 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-07 00:51:16.898286 | orchestrator | Wednesday 07 January 2026 00:48:35 +0000 (0:00:01.801) 0:00:37.040 ***** 2026-01-07 00:51:16.898292 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.898297 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.898302 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.898307 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:51:16.898312 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:51:16.898317 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:51:16.898322 | orchestrator | 2026-01-07 00:51:16.898332 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-07 00:51:16.898337 | orchestrator | 
2026-01-07 00:51:16.898342 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:51:16.898347 | orchestrator | Wednesday 07 January 2026 00:48:40 +0000 (0:00:05.089) 0:00:42.130 ***** 2026-01-07 00:51:16.898352 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:51:16.898358 | orchestrator | 2026-01-07 00:51:16.898362 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:51:16.898367 | orchestrator | Wednesday 07 January 2026 00:48:40 +0000 (0:00:00.536) 0:00:42.667 ***** 2026-01-07 00:51:16.898373 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:51:16.898378 | orchestrator | 2026-01-07 00:51:16.898383 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-07 00:51:16.898388 | orchestrator | Wednesday 07 January 2026 00:48:41 +0000 (0:00:00.816) 0:00:43.484 ***** 2026-01-07 00:51:16.898394 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898399 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898404 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898410 | orchestrator | 2026-01-07 00:51:16.898414 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-07 00:51:16.898419 | orchestrator | Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.986) 0:00:44.470 ***** 2026-01-07 00:51:16.898425 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898430 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898435 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898440 | orchestrator | 2026-01-07 00:51:16.898445 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-07 00:51:16.898451 | orchestrator | 
Wednesday 07 January 2026 00:48:42 +0000 (0:00:00.395) 0:00:44.866 ***** 2026-01-07 00:51:16.898456 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898461 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898466 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898471 | orchestrator | 2026-01-07 00:51:16.898476 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-07 00:51:16.898526 | orchestrator | Wednesday 07 January 2026 00:48:43 +0000 (0:00:00.656) 0:00:45.522 ***** 2026-01-07 00:51:16.898538 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898543 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898548 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898553 | orchestrator | 2026-01-07 00:51:16.898558 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-07 00:51:16.898563 | orchestrator | Wednesday 07 January 2026 00:48:43 +0000 (0:00:00.271) 0:00:45.794 ***** 2026-01-07 00:51:16.898568 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898572 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898575 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898578 | orchestrator | 2026-01-07 00:51:16.898581 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-07 00:51:16.898585 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:00.311) 0:00:46.105 ***** 2026-01-07 00:51:16.898588 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898591 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898594 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898597 | orchestrator | 2026-01-07 00:51:16.898612 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-07 00:51:16.898616 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:00.296) 
0:00:46.402 ***** 2026-01-07 00:51:16.898620 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898623 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898626 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898629 | orchestrator | 2026-01-07 00:51:16.898632 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-07 00:51:16.898636 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:00.361) 0:00:46.763 ***** 2026-01-07 00:51:16.898639 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898642 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898645 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898648 | orchestrator | 2026-01-07 00:51:16.898651 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-07 00:51:16.898655 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.266) 0:00:47.030 ***** 2026-01-07 00:51:16.898658 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898661 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898664 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898667 | orchestrator | 2026-01-07 00:51:16.898670 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-07 00:51:16.898673 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.268) 0:00:47.299 ***** 2026-01-07 00:51:16.898677 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898680 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898683 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898686 | orchestrator | 2026-01-07 00:51:16.898689 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-07 00:51:16.898692 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:00.263) 
0:00:47.563 ***** 2026-01-07 00:51:16.898695 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898699 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898702 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898705 | orchestrator | 2026-01-07 00:51:16.898708 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-07 00:51:16.898711 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:00.404) 0:00:47.967 ***** 2026-01-07 00:51:16.898714 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898718 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898721 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898724 | orchestrator | 2026-01-07 00:51:16.898727 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-07 00:51:16.898730 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:00.309) 0:00:48.277 ***** 2026-01-07 00:51:16.898733 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898745 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898749 | orchestrator | 2026-01-07 00:51:16.898752 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-07 00:51:16.898755 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:00.297) 0:00:48.574 ***** 2026-01-07 00:51:16.898758 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898761 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898765 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898768 | orchestrator | 2026-01-07 00:51:16.898771 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-07 00:51:16.898774 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:00.288) 
0:00:48.862 ***** 2026-01-07 00:51:16.898778 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898781 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898784 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898787 | orchestrator | 2026-01-07 00:51:16.898790 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-07 00:51:16.898793 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:00.257) 0:00:49.120 ***** 2026-01-07 00:51:16.898796 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898799 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898803 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898806 | orchestrator | 2026-01-07 00:51:16.898809 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-07 00:51:16.898812 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:00.469) 0:00:49.590 ***** 2026-01-07 00:51:16.898815 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898819 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898822 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898826 | orchestrator | 2026-01-07 00:51:16.898829 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:51:16.898833 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:00.301) 0:00:49.891 ***** 2026-01-07 00:51:16.898836 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:51:16.898840 | orchestrator | 2026-01-07 00:51:16.898847 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-07 00:51:16.898851 | orchestrator | Wednesday 07 January 2026 00:48:48 +0000 (0:00:00.707) 0:00:50.599 ***** 2026-01-07 00:51:16.898854 | orchestrator | 
ok: [testbed-node-0] 2026-01-07 00:51:16.898858 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898861 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898865 | orchestrator | 2026-01-07 00:51:16.898868 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-07 00:51:16.898872 | orchestrator | Wednesday 07 January 2026 00:48:49 +0000 (0:00:00.586) 0:00:51.186 ***** 2026-01-07 00:51:16.898875 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.898879 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.898883 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.898886 | orchestrator | 2026-01-07 00:51:16.898890 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-07 00:51:16.898893 | orchestrator | Wednesday 07 January 2026 00:48:49 +0000 (0:00:00.428) 0:00:51.615 ***** 2026-01-07 00:51:16.898897 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898900 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898904 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898907 | orchestrator | 2026-01-07 00:51:16.898911 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-07 00:51:16.898914 | orchestrator | Wednesday 07 January 2026 00:48:49 +0000 (0:00:00.304) 0:00:51.920 ***** 2026-01-07 00:51:16.898918 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898921 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898925 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898931 | orchestrator | 2026-01-07 00:51:16.898934 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-07 00:51:16.898938 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:00.292) 0:00:52.212 ***** 2026-01-07 00:51:16.898941 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:51:16.898945 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898948 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898952 | orchestrator | 2026-01-07 00:51:16.898955 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-07 00:51:16.898959 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:00.435) 0:00:52.648 ***** 2026-01-07 00:51:16.898962 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898966 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898969 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898973 | orchestrator | 2026-01-07 00:51:16.898977 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-07 00:51:16.898980 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:00.277) 0:00:52.925 ***** 2026-01-07 00:51:16.898984 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.898987 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.898991 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.898994 | orchestrator | 2026-01-07 00:51:16.898998 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-07 00:51:16.899001 | orchestrator | Wednesday 07 January 2026 00:48:51 +0000 (0:00:00.268) 0:00:53.193 ***** 2026-01-07 00:51:16.899005 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.899008 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.899012 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.899016 | orchestrator | 2026-01-07 00:51:16.899019 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-07 00:51:16.899023 | orchestrator | Wednesday 07 January 2026 00:48:51 +0000 (0:00:00.291) 0:00:53.485 ***** 2026-01-07 00:51:16.899031 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899092 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899104 | orchestrator | 2026-01-07 00:51:16.899110 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-07 00:51:16.899115 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:02.420) 0:00:55.905 ***** 2026-01-07 00:51:16.899140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899152 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899203 | orchestrator | 2026-01-07 00:51:16.899206 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-07 00:51:16.899210 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:04.768) 0:01:00.674 ***** 2026-01-07 00:51:16.899215 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-07 00:51:16.899218 | orchestrator | 2026-01-07 00:51:16.899221 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-07 00:51:16.899224 | orchestrator | Wednesday 07 January 2026 00:48:59 +0000 
(0:00:00.487) 0:01:01.161 ***** 2026-01-07 00:51:16.899228 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899231 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899234 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899237 | orchestrator | 2026-01-07 00:51:16.899240 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-07 00:51:16.899243 | orchestrator | Wednesday 07 January 2026 00:49:00 +0000 (0:00:00.869) 0:01:02.031 ***** 2026-01-07 00:51:16.899246 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899254 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899257 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899260 | orchestrator | 2026-01-07 00:51:16.899264 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-07 00:51:16.899269 | orchestrator | Wednesday 07 January 2026 00:49:01 +0000 (0:00:01.486) 0:01:03.518 ***** 2026-01-07 00:51:16.899274 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899279 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899284 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899289 | orchestrator | 2026-01-07 00:51:16.899294 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-07 00:51:16.899299 | orchestrator | Wednesday 07 January 2026 00:49:03 +0000 (0:00:01.604) 0:01:05.122 ***** 2026-01-07 00:51:16.899307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 
'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 
00:51:16.899398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899419 | orchestrator | 2026-01-07 00:51:16.899424 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-07 00:51:16.899442 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:04.285) 0:01:09.408 ***** 2026-01-07 00:51:16.899447 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:51:16.899452 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899457 | orchestrator | } 2026-01-07 00:51:16.899462 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:51:16.899467 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899472 | orchestrator | } 2026-01-07 00:51:16.899476 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:51:16.899481 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899486 | orchestrator | } 2026-01-07 00:51:16.899491 | orchestrator | 2026-01-07 00:51:16.899496 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:51:16.899505 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:00.399) 0:01:09.807 ***** 2026-01-07 00:51:16.899537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.899595 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.899621 | orchestrator | 2026-01-07 00:51:16.899627 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-07 00:51:16.899632 | orchestrator | Wednesday 07 January 2026 00:49:09 +0000 (0:00:01.839) 0:01:11.647 ***** 2026-01-07 00:51:16.899637 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-07 00:51:16.899642 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-07 00:51:16.899647 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-07 00:51:16.899652 | orchestrator | 2026-01-07 00:51:16.899657 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-07 00:51:16.899661 | orchestrator | Wednesday 07 January 2026 00:49:10 +0000 (0:00:00.944) 0:01:12.591 ***** 2026-01-07 00:51:16.899666 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:51:16.899671 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899676 | orchestrator | } 2026-01-07 00:51:16.899681 | orchestrator | changed: 
[testbed-node-1] => { 2026-01-07 00:51:16.899686 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899692 | orchestrator | } 2026-01-07 00:51:16.899697 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:51:16.899702 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.899710 | orchestrator | } 2026-01-07 00:51:16.899716 | orchestrator | 2026-01-07 00:51:16.899721 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.899726 | orchestrator | Wednesday 07 January 2026 00:49:11 +0000 (0:00:00.558) 0:01:13.149 ***** 2026-01-07 00:51:16.899731 | orchestrator | 2026-01-07 00:51:16.899736 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.899741 | orchestrator | Wednesday 07 January 2026 00:49:11 +0000 (0:00:00.060) 0:01:13.210 ***** 2026-01-07 00:51:16.899745 | orchestrator | 2026-01-07 00:51:16.899750 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.899756 | orchestrator | Wednesday 07 January 2026 00:49:11 +0000 (0:00:00.056) 0:01:13.267 ***** 2026-01-07 00:51:16.899761 | orchestrator | 2026-01-07 00:51:16.899766 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-07 00:51:16.899770 | orchestrator | Wednesday 07 January 2026 00:49:11 +0000 (0:00:00.058) 0:01:13.325 ***** 2026-01-07 00:51:16.899775 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899780 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899785 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899795 | orchestrator | 2026-01-07 00:51:16.899800 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-07 00:51:16.899805 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:13.674) 0:01:27.000 ***** 2026-01-07 
00:51:16.899810 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899815 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899820 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899826 | orchestrator | 2026-01-07 00:51:16.899831 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-07 00:51:16.899836 | orchestrator | Wednesday 07 January 2026 00:49:38 +0000 (0:00:13.789) 0:01:40.789 ***** 2026-01-07 00:51:16.899841 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-07 00:51:16.899846 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-07 00:51:16.899851 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-07 00:51:16.899856 | orchestrator | 2026-01-07 00:51:16.899861 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-07 00:51:16.899867 | orchestrator | Wednesday 07 January 2026 00:49:54 +0000 (0:00:15.996) 0:01:56.785 ***** 2026-01-07 00:51:16.899872 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899877 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.899882 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.899887 | orchestrator | 2026-01-07 00:51:16.899892 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-07 00:51:16.899897 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:08.291) 0:02:05.076 ***** 2026-01-07 00:51:16.899903 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.899908 | orchestrator | 2026-01-07 00:51:16.899913 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-07 00:51:16.899918 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:00.108) 0:02:05.185 ***** 2026-01-07 00:51:16.899924 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.899929 | orchestrator | ok: [testbed-node-0] 
2026-01-07 00:51:16.899934 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.899939 | orchestrator | 2026-01-07 00:51:16.899945 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-07 00:51:16.899950 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:00.696) 0:02:05.882 ***** 2026-01-07 00:51:16.899955 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.899960 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.899965 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.899970 | orchestrator | 2026-01-07 00:51:16.899975 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-07 00:51:16.899980 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:00.612) 0:02:06.494 ***** 2026-01-07 00:51:16.899986 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.899991 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.899996 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900001 | orchestrator | 2026-01-07 00:51:16.900010 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-07 00:51:16.900015 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.814) 0:02:07.309 ***** 2026-01-07 00:51:16.900021 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.900026 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.900031 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.900036 | orchestrator | 2026-01-07 00:51:16.900042 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-07 00:51:16.900047 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.578) 0:02:07.887 ***** 2026-01-07 00:51:16.900052 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900057 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900062 | orchestrator 
| ok: [testbed-node-1] 2026-01-07 00:51:16.900067 | orchestrator | 2026-01-07 00:51:16.900072 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-07 00:51:16.900078 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:00.662) 0:02:08.549 ***** 2026-01-07 00:51:16.900087 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900092 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.900097 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900103 | orchestrator | 2026-01-07 00:51:16.900108 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-07 00:51:16.900113 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:00.666) 0:02:09.216 ***** 2026-01-07 00:51:16.900119 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-07 00:51:16.900124 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-07 00:51:16.900129 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-07 00:51:16.900135 | orchestrator | 2026-01-07 00:51:16.900140 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-07 00:51:16.900145 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.868) 0:02:10.084 ***** 2026-01-07 00:51:16.900151 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900156 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.900161 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900166 | orchestrator | 2026-01-07 00:51:16.900171 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-07 00:51:16.900180 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.291) 0:02:10.375 ***** 2026-01-07 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:16.900190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900206 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900218 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900229 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900235 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 
'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900250 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 
'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900273 | orchestrator | 2026-01-07 00:51:16.900279 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-07 00:51:16.900284 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:03.685) 0:02:14.061 ***** 2026-01-07 00:51:16.900293 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900311 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-07 00:51:16.900338 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900382 | orchestrator | 2026-01-07 00:51:16.900387 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-07 00:51:16.900393 | orchestrator | Wednesday 07 January 2026 00:50:17 +0000 (0:00:05.738) 0:02:19.799 ***** 2026-01-07 00:51:16.900398 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-07 00:51:16.900404 | orchestrator | 2026-01-07 00:51:16.900409 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-07 00:51:16.900414 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.653) 0:02:20.453 ***** 2026-01-07 00:51:16.900419 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900424 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:51:16.900430 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900435 | orchestrator | 2026-01-07 00:51:16.900440 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-07 00:51:16.900445 | orchestrator | Wednesday 07 January 2026 00:50:19 +0000 (0:00:00.688) 0:02:21.142 ***** 2026-01-07 00:51:16.900451 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900456 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.900461 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900466 | orchestrator | 2026-01-07 00:51:16.900471 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-07 00:51:16.900477 | orchestrator | Wednesday 07 January 2026 00:50:20 +0000 (0:00:01.464) 0:02:22.607 ***** 2026-01-07 00:51:16.900482 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.900487 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.900492 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.900497 | orchestrator | 2026-01-07 00:51:16.900502 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-07 00:51:16.900511 | orchestrator | Wednesday 07 January 2026 00:50:22 +0000 (0:00:01.653) 0:02:24.261 ***** 2026-01-07 00:51:16.900516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900530 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900551 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900594 | orchestrator | 2026-01-07 00:51:16.900633 | orchestrator | TASK [service-check-containers : ovn_db | Notify 
handlers to restart containers] *** 2026-01-07 00:51:16.900641 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:04.746) 0:02:29.007 ***** 2026-01-07 00:51:16.900647 | orchestrator | ok: [testbed-node-0] => { 2026-01-07 00:51:16.900652 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900657 | orchestrator | } 2026-01-07 00:51:16.900662 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:51:16.900667 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900672 | orchestrator | } 2026-01-07 00:51:16.900676 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:51:16.900681 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900687 | orchestrator | } 2026-01-07 00:51:16.900692 | orchestrator | 2026-01-07 00:51:16.900697 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:51:16.900703 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:00.324) 0:02:29.331 ***** 2026-01-07 00:51:16.900712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:51:16.900777 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:51:16.900783 | orchestrator | 2026-01-07 00:51:16.900788 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-07 00:51:16.900793 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:01.842) 0:02:31.174 ***** 2026-01-07 00:51:16.900799 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-07 00:51:16.900804 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-07 00:51:16.900809 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-07 00:51:16.900814 | orchestrator | 2026-01-07 00:51:16.900819 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-07 00:51:16.900824 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:01.096) 0:02:32.270 ***** 2026-01-07 00:51:16.900830 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:51:16.900836 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900841 | orchestrator | } 2026-01-07 00:51:16.900846 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:51:16.900851 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900856 | orchestrator | } 2026-01-07 00:51:16.900861 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:51:16.900867 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:51:16.900872 | orchestrator | } 2026-01-07 00:51:16.900877 | orchestrator | 2026-01-07 00:51:16.900882 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.900887 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:00.508) 0:02:32.778 ***** 2026-01-07 00:51:16.900892 | orchestrator | 2026-01-07 00:51:16.900897 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.900902 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:00.060) 0:02:32.839 ***** 2026-01-07 00:51:16.900907 | orchestrator | 2026-01-07 00:51:16.900913 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-07 00:51:16.900918 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:00.057) 0:02:32.897 ***** 2026-01-07 00:51:16.900923 | orchestrator | 2026-01-07 00:51:16.900928 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-07 00:51:16.900933 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:00.061) 0:02:32.959 ***** 2026-01-07 00:51:16.900938 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.900943 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.900949 | orchestrator | 2026-01-07 00:51:16.900954 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-07 00:51:16.900962 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:11.732) 0:02:44.691 ***** 2026-01-07 00:51:16.900967 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:16.900972 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:16.900977 | orchestrator | 2026-01-07 00:51:16.900982 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-07 00:51:16.900987 | 
orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:13.159) 0:02:57.850 ***** 2026-01-07 00:51:16.900992 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-07 00:51:16.900997 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-07 00:51:16.901007 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-07 00:51:16.901012 | orchestrator | 2026-01-07 00:51:16.901017 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-07 00:51:16.901022 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:12.881) 0:03:10.732 ***** 2026-01-07 00:51:16.901027 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:16.901032 | orchestrator | 2026-01-07 00:51:16.901038 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-07 00:51:16.901043 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.118) 0:03:10.850 ***** 2026-01-07 00:51:16.901048 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:16.901053 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:16.901058 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:16.901063 | orchestrator | 2026-01-07 00:51:16.901068 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-07 00:51:16.901073 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.765) 0:03:11.615 ***** 2026-01-07 00:51:16.901078 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:16.901083 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:16.901089 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:16.901094 | orchestrator | 2026-01-07 00:51:16.901099 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-07 00:51:16.901104 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:00.543) 0:03:12.159 ***** 2026-01-07 00:51:16.901109 | orchestrator | 
ok: [testbed-node-0]
2026-01-07 00:51:16.901114 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:51:16.901119 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:51:16.901124 | orchestrator |
2026-01-07 00:51:16.901133 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-07 00:51:16.901138 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.942) 0:03:13.101 *****
2026-01-07 00:51:16.901143 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:51:16.901148 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:51:16.901153 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:51:16.901159 | orchestrator |
2026-01-07 00:51:16.901164 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-07 00:51:16.901169 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:00.706) 0:03:13.808 *****
2026-01-07 00:51:16.901174 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:51:16.901180 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:51:16.901185 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:51:16.901190 | orchestrator |
2026-01-07 00:51:16.901195 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-07 00:51:16.901200 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.752) 0:03:14.560 *****
2026-01-07 00:51:16.901205 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:51:16.901210 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:51:16.901215 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:51:16.901221 | orchestrator |
2026-01-07 00:51:16.901226 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-01-07 00:51:16.901231 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.951) 0:03:15.512 *****
2026-01-07 00:51:16.901236 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-01-07 00:51:16.901241 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-01-07 00:51:16.901246 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-01-07 00:51:16.901252 | orchestrator |
2026-01-07 00:51:16.901257 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:51:16.901262 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-07 00:51:16.901268 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-01-07 00:51:16.901273 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-01-07 00:51:16.901284 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:51:16.901289 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:51:16.901294 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:51:16.901300 | orchestrator |
2026-01-07 00:51:16.901305 | orchestrator |
2026-01-07 00:51:16.901310 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:51:16.901315 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:01.091) 0:03:16.603 *****
2026-01-07 00:51:16.901320 | orchestrator | ===============================================================================
2026-01-07 00:51:16.901325 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 28.88s
2026-01-07 00:51:16.901329 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 26.95s
2026-01-07 00:51:16.901335 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 25.41s
2026-01-07 00:51:16.901344 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.00s
2026-01-07 00:51:16.901349 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.29s
2026-01-07 00:51:16.901354 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.74s
2026-01-07 00:51:16.901359 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 5.09s
2026-01-07 00:51:16.901364 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.77s
2026-01-07 00:51:16.901369 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.75s
2026-01-07 00:51:16.901374 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.29s
2026-01-07 00:51:16.901379 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.69s
2026-01-07 00:51:16.901384 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.78s
2026-01-07 00:51:16.901389 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.42s
2026-01-07 00:51:16.901394 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.96s
2026-01-07 00:51:16.901399 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.84s
2026-01-07 00:51:16.901404 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.84s
2026-01-07 00:51:16.901410 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.80s
2026-01-07 00:51:16.901415 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.76s
2026-01-07 00:51:16.901420 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.65s
2026-01-07 00:51:16.901425 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.60s
2026-01-07 00:51:19.943540 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:19.945637 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:19.948741 | orchestrator | 2026-01-07 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:22.988482 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:22.989080 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:22.989155 | orchestrator | 2026-01-07 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:26.036246 | orchestrator | 2026-01-07 00:51:26 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:26.037473 | orchestrator | 2026-01-07 00:51:26 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:26.037517 | orchestrator | 2026-01-07 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:29.078745 | orchestrator | 2026-01-07 00:51:29 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:29.078919 | orchestrator | 2026-01-07 00:51:29 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:29.079553 | orchestrator | 2026-01-07 00:51:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:32.119406 | orchestrator | 2026-01-07 00:51:32 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:32.122586 | orchestrator | 2026-01-07 00:51:32 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:32.123185 | orchestrator | 2026-01-07 00:51:32 | INFO  | Wait 1 second(s) until the next check
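The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records above come from the deployment's task watcher, which polls the task API until each Celery task leaves the STARTED state. A minimal sketch of such a poll loop, assuming a `get_state(task_id)` helper that queries the API (the names here are illustrative, not the actual OSISM client):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1, log=print):
    """Poll task states until no task is still STARTED.

    get_state(task_id) -> str is an assumed lookup against the task API;
    one line is logged per task per check, matching the log output above.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        if not still_running:
            return
        log(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)
        pending = still_running
```

With two tasks in flight, the loop prints one status line per task each round, so a pair of long-running tasks produces exactly the three-line cadence seen in this log.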
2026-01-07 00:51:35.164223 | orchestrator | 2026-01-07 00:51:35 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:35.165985 | orchestrator | 2026-01-07 00:51:35 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:35.166092 | orchestrator | 2026-01-07 00:51:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:38.213238 | orchestrator | 2026-01-07 00:51:38 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:38.215455 | orchestrator | 2026-01-07 00:51:38 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:38.215746 | orchestrator | 2026-01-07 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:41.249008 | orchestrator | 2026-01-07 00:51:41 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:41.251715 | orchestrator | 2026-01-07 00:51:41 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:41.251771 | orchestrator | 2026-01-07 00:51:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:44.290198 | orchestrator | 2026-01-07 00:51:44 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:44.291172 | orchestrator | 2026-01-07 00:51:44 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:44.291207 | orchestrator | 2026-01-07 00:51:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:47.332431 | orchestrator | 2026-01-07 00:51:47 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:47.333340 | orchestrator | 2026-01-07 00:51:47 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:47.333364 | orchestrator | 2026-01-07 00:51:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:50.381192 | orchestrator | 2026-01-07 00:51:50 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:50.382206 | orchestrator | 2026-01-07 00:51:50 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:50.382255 | orchestrator | 2026-01-07 00:51:50 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:53.420340 | orchestrator | 2026-01-07 00:51:53 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:53.420678 | orchestrator | 2026-01-07 00:51:53 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:53.420717 | orchestrator | 2026-01-07 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:56.478858 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:56.483173 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:56.483216 | orchestrator | 2026-01-07 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:59.536343 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:51:59.537730 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:51:59.538003 | orchestrator | 2026-01-07 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:02.585848 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:02.586771 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:02.586810 | orchestrator | 2026-01-07 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:05.639665 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:05.641644 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:05.641779 | orchestrator | 2026-01-07 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:08.687041 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:08.688197 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:08.688516 | orchestrator | 2026-01-07 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:11.731165 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:11.733029 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:11.733742 | orchestrator | 2026-01-07 00:52:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:14.774811 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:14.776319 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:14.776381 | orchestrator | 2026-01-07 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:17.821742 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:17.823075 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:17.823105 | orchestrator | 2026-01-07 00:52:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:20.876765 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:20.876862 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:20.876876 | orchestrator | 2026-01-07 00:52:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:23.913959 | orchestrator | 2026-01-07 00:52:23 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:23.916352 | orchestrator | 2026-01-07 00:52:23 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:23.916397 | orchestrator | 2026-01-07 00:52:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:26.948140 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:26.949809 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:26.949900 | orchestrator | 2026-01-07 00:52:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:29.992875 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:29.994895 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:29.994969 | orchestrator | 2026-01-07 00:52:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:33.033578 | orchestrator | 2026-01-07 00:52:33 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:33.033799 | orchestrator | 2026-01-07 00:52:33 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:33.033871 | orchestrator | 2026-01-07 00:52:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:36.069943 | orchestrator | 2026-01-07 00:52:36 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:36.071018 | orchestrator | 2026-01-07 00:52:36 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:36.071058 | orchestrator | 2026-01-07 00:52:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:39.114544 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:39.116349 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:39.116456 | orchestrator | 2026-01-07 00:52:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:42.164719 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:42.165756 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:42.165803 | orchestrator | 2026-01-07 00:52:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:45.200143 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:45.202666 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:45.203491 | orchestrator | 2026-01-07 00:52:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:48.251116 | orchestrator | 2026-01-07 00:52:48 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:48.252597 | orchestrator | 2026-01-07 00:52:48 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:48.252636 | orchestrator | 2026-01-07 00:52:48 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:51.290328 | orchestrator | 2026-01-07 00:52:51 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:51.290429 | orchestrator | 2026-01-07 00:52:51 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:51.290440 | orchestrator | 2026-01-07 00:52:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:54.337317 | orchestrator | 2026-01-07 00:52:54 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:54.338126 | orchestrator | 2026-01-07 00:52:54 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:54.340193 | orchestrator | 2026-01-07 00:52:54 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:57.395326 | orchestrator | 2026-01-07 00:52:57 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:52:57.398946 | orchestrator | 2026-01-07 00:52:57 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state STARTED
2026-01-07 00:52:57.399016 | orchestrator | 2026-01-07 00:52:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:00.448569 | orchestrator | 2026-01-07 00:53:00 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:53:00.452139 | orchestrator | 2026-01-07 00:53:00 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:53:00.455952 | orchestrator | 2026-01-07 00:53:00 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:53:00.465967 | orchestrator | 2026-01-07 00:53:00 | INFO  | Task 2a357146-dedc-4329-b436-2433a197cfec is in state SUCCESS
2026-01-07 00:53:00.467453 | orchestrator |
2026-01-07 00:53:00.467488 | orchestrator |
2026-01-07 00:53:00.467493 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:53:00.467498 | orchestrator |
2026-01-07 00:53:00.467503 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:53:00.467507 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.420) 0:00:00.420 *****
2026-01-07 00:53:00.467512 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.467530 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.467534 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.467538 | orchestrator |
2026-01-07 00:53:00.467542 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:53:00.467546 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.337) 0:00:00.757 *****
2026-01-07 00:53:00.467551 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-07 00:53:00.467555 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-07 00:53:00.467559 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-07 00:53:00.467563 | orchestrator |
2026-01-07 00:53:00.467567 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-07 00:53:00.467571 | orchestrator |
2026-01-07 00:53:00.467574 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-07 00:53:00.467578 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.715) 0:00:01.473 *****
2026-01-07 00:53:00.467582 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.467587 | orchestrator |
2026-01-07 00:53:00.467590 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-07 00:53:00.467594 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:01.008) 0:00:02.482 *****
2026-01-07 00:53:00.467598 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.467602 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.467606 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.467610 | orchestrator |
2026-01-07 00:53:00.467614 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-07 00:53:00.467618 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:01.879) 0:00:04.361 *****
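The sysctl task that follows writes kernel parameters such as `net.ipv4.ip_nonlocal_bind`. Under the hood, a dotted sysctl name maps onto a file beneath `/proc/sys`, and entries whose value is the `KOLLA_UNSET` sentinel are left at their kernel default (which is why the log reports them as `ok` rather than `changed`). A small illustrative sketch of that mapping, not Kolla's actual implementation:

```python
from pathlib import Path

PROC_SYS = Path("/proc/sys")


def sysctl_path(name: str) -> Path:
    """Map a dotted sysctl name to its /proc/sys file."""
    return PROC_SYS.joinpath(*name.split("."))


def apply_sysctls(settings, write=lambda p, v: p.write_text(f"{v}\n")):
    """Apply {'name': ..., 'value': ...} items, skipping KOLLA_UNSET
    entries; returns the names that were actually written."""
    applied = []
    for item in settings:
        if item["value"] == "KOLLA_UNSET":
            continue  # leave the kernel default untouched
        write(sysctl_path(item["name"]), item["value"])
        applied.append(item["name"])
    return applied
```

Writing to `/proc/sys` requires root; the injectable `write` callable is there so the mapping logic can be exercised without touching the running kernel.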
2026-01-07 00:53:00.467622 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.467626 | orchestrator |
2026-01-07 00:53:00.467630 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-07 00:53:00.467634 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:01.477) 0:00:05.841 *****
2026-01-07 00:53:00.467656 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.467660 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.467664 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.467668 | orchestrator |
2026-01-07 00:53:00.467672 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-07 00:53:00.467676 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:00.755) 0:00:06.597 *****
2026-01-07 00:53:00.467680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467687 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467821 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467829 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:53:00.467833 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:53:00.467837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:53:00.467841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:53:00.467845 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:53:00.467849 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:53:00.467852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:53:00.467856 | orchestrator |
2026-01-07 00:53:00.467860 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-07 00:53:00.467864 | orchestrator | Wednesday 07 January 2026 00:46:57 +0000 (0:00:03.752) 0:00:10.349 *****
2026-01-07 00:53:00.467868 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:53:00.467872 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:53:00.467876 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:53:00.467880 | orchestrator |
2026-01-07 00:53:00.467884 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-07 00:53:00.467888 | orchestrator | Wednesday 07 January 2026 00:46:58 +0000 (0:00:00.795) 0:00:11.145 *****
2026-01-07 00:53:00.467891 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:53:00.467895 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:53:00.467899 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:53:00.467903 | orchestrator |
2026-01-07 00:53:00.467907 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-07 00:53:00.467911 | orchestrator | Wednesday 07 January 2026 00:46:59 +0000 (0:00:01.526) 0:00:12.671 *****
2026-01-07 00:53:00.467914 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:53:00.467918 | orchestrator | skipping: [testbed-node-0]
2026-01-07
00:53:00.467930 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:53:00.467933 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.467937 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:53:00.467941 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.467945 | orchestrator |
2026-01-07 00:53:00.467949 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-07 00:53:00.467953 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.653) 0:00:13.324 *****
2026-01-07 00:53:00.467958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.467971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.468005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.468010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.468038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.468042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.468046 | orchestrator |
2026-01-07 00:53:00.468050 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-07 00:53:00.468054 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:02.230) 0:00:15.555 *****
2026-01-07 00:53:00.468058 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.468062 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.468066 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.468069 | orchestrator |
2026-01-07 00:53:00.468073 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-07 00:53:00.468077 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:01.221) 0:00:16.777 *****
2026-01-07 00:53:00.468081 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-07 00:53:00.468085 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-07 00:53:00.468089 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-07 00:53:00.468093 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-07 00:53:00.468096 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-07 00:53:00.468100 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-07 00:53:00.468104 | orchestrator |
2026-01-07 00:53:00.468108 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-07 00:53:00.468111 | orchestrator | Wednesday 07 January 2026 00:47:05 +0000 (0:00:01.772) 0:00:18.549 *****
2026-01-07 00:53:00.468115 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.468119 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.468123 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.468126 | orchestrator |
2026-01-07 00:53:00.468130 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-07 00:53:00.468134 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:01.931) 0:00:19.778 *****
2026-01-07 00:53:00.468138 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.468142 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.468145 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.468149 | orchestrator |
2026-01-07 00:53:00.468153 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-07 00:53:00.468157 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:01.931) 0:00:21.709 *****
2026-01-07 00:53:00.468163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.468175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.468185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-07 00:53:00.468189 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.468193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.468198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.468211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-07 00:53:00.468215 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.468223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.468228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.468232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:53:00.468239 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468243 | orchestrator | 2026-01-07 00:53:00.468247 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-07 00:53:00.468251 | orchestrator | Wednesday 07 January 2026 00:47:10 +0000 (0:00:01.185) 0:00:22.894 ***** 2026-01-07 00:53:00.468255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468265 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468297 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:53:00.468305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:53:00.468333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773', '__omit_place_holder__e2ef238678100f87eca6377434c851f125fdd773'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:53:00.468341 | orchestrator | 2026-01-07 00:53:00.468345 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-07 00:53:00.468349 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:03.212) 0:00:26.107 ***** 2026-01-07 00:53:00.468377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468417 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468428 | orchestrator | 2026-01-07 00:53:00.468432 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-07 00:53:00.468436 | orchestrator | Wednesday 07 January 2026 00:47:17 +0000 (0:00:03.945) 0:00:30.052 ***** 2026-01-07 00:53:00.468439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:53:00.468444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:53:00.468451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:53:00.468454 | orchestrator | 2026-01-07 00:53:00.468458 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-07 00:53:00.468463 | orchestrator | Wednesday 07 January 2026 00:47:19 
+0000 (0:00:02.526) 0:00:32.579 ***** 2026-01-07 00:53:00.468468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:53:00.468472 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:53:00.468477 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:53:00.468481 | orchestrator | 2026-01-07 00:53:00.468492 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-07 00:53:00.468496 | orchestrator | Wednesday 07 January 2026 00:47:25 +0000 (0:00:05.757) 0:00:38.336 ***** 2026-01-07 00:53:00.468501 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468505 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.468510 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468514 | orchestrator | 2026-01-07 00:53:00.468519 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-07 00:53:00.468523 | orchestrator | Wednesday 07 January 2026 00:47:26 +0000 (0:00:00.565) 0:00:38.902 ***** 2026-01-07 00:53:00.468528 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:53:00.468533 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:53:00.468538 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:53:00.468542 | orchestrator | 2026-01-07 00:53:00.468547 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-07 00:53:00.468551 | orchestrator | Wednesday 07 January 2026 00:47:28 
+0000 (0:00:02.072) 0:00:40.975 ***** 2026-01-07 00:53:00.468556 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:53:00.468560 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:53:00.468565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:53:00.468573 | orchestrator | 2026-01-07 00:53:00.468577 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-07 00:53:00.468581 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:02.021) 0:00:42.996 ***** 2026-01-07 00:53:00.468586 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.468590 | orchestrator | 2026-01-07 00:53:00.468595 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-07 00:53:00.468599 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.627) 0:00:43.624 ***** 2026-01-07 00:53:00.468604 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-07 00:53:00.468608 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-07 00:53:00.468613 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-07 00:53:00.468618 | orchestrator | 2026-01-07 00:53:00.468622 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-07 00:53:00.468627 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:02.063) 0:00:45.688 ***** 2026-01-07 00:53:00.468632 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-07 00:53:00.468636 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-07 00:53:00.468641 
| orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-07 00:53:00.468645 | orchestrator | 2026-01-07 00:53:00.468650 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-07 00:53:00.468654 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:02.289) 0:00:47.977 ***** 2026-01-07 00:53:00.468658 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468663 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.468667 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468672 | orchestrator | 2026-01-07 00:53:00.468676 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-07 00:53:00.468681 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:00.268) 0:00:48.246 ***** 2026-01-07 00:53:00.468685 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468689 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.468694 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468698 | orchestrator | 2026-01-07 00:53:00.468702 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-07 00:53:00.468707 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:00.280) 0:00:48.526 ***** 2026-01-07 00:53:00.468714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.468751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468758 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.468773 | orchestrator | 2026-01-07 00:53:00.468778 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-07 00:53:00.468782 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:03.428) 0:00:51.954 ***** 2026-01-07 00:53:00.468787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 
00:53:00.468791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468800 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.468805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.468810 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468822 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.468838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468847 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468851 | orchestrator | 2026-01-07 00:53:00.468855 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-07 00:53:00.468859 | orchestrator | Wednesday 07 January 2026 00:47:40 +0000 (0:00:00.994) 0:00:52.948 ***** 2026-01-07 00:53:00.468863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.468867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468877 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.468891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468899 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.468903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.468907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.468911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.468915 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.468918 | orchestrator | 2026-01-07 00:53:00.468922 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-07 00:53:00.468926 | orchestrator | Wednesday 07 January 2026 00:47:41 +0000 (0:00:01.076) 0:00:54.024 ***** 2026-01-07 00:53:00.468933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:53:00.468940 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:53:00.468944 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:53:00.468948 | orchestrator | 2026-01-07 00:53:00.468952 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-07 00:53:00.468955 | orchestrator | Wednesday 07 January 2026 00:47:42 +0000 (0:00:01.462) 0:00:55.487 ***** 2026-01-07 00:53:00.468959 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:53:00.468965 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:53:00.468969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:53:00.468973 | orchestrator | 2026-01-07 00:53:00.468977 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-07 00:53:00.468980 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:01.811) 0:00:57.299 ***** 2026-01-07 00:53:00.468984 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:53:00.468988 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:53:00.468992 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:53:00.468995 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.468999 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:53:00.469003 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:53:00.469007 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.469011 | orchestrator | skipping: [testbed-node-2] 
=> (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:53:00.469014 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.469018 | orchestrator | 2026-01-07 00:53:00.469022 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-07 00:53:00.469026 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:01.409) 0:00:58.709 ***** 2026-01-07 00:53:00.469030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:53:00.469065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.469069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.469073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:53:00.469080 | orchestrator | 2026-01-07 00:53:00.469084 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-07 00:53:00.469088 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:03.592) 0:01:02.301 ***** 2026-01-07 00:53:00.469092 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:53:00.469096 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:53:00.469100 | orchestrator | } 2026-01-07 00:53:00.469104 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:53:00.469107 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:53:00.469111 | orchestrator | } 2026-01-07 00:53:00.469115 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:53:00.469119 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:53:00.469122 | orchestrator | } 2026-01-07 00:53:00.469126 | orchestrator | 2026-01-07 00:53:00.469130 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:53:00.469134 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:00.307) 0:01:02.609 ***** 2026-01-07 00:53:00.469141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.469149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.469154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.469157 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.478223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.478332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.478394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.478401 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.478407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:53:00.478428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:53:00.478432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:53:00.478436 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.478440 | orchestrator | 2026-01-07 00:53:00.478445 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-07 00:53:00.478451 | orchestrator | Wednesday 07 January 2026 00:47:50 +0000 (0:00:01.093) 0:01:03.702 ***** 2026-01-07 00:53:00.478455 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.478459 | orchestrator | 2026-01-07 00:53:00.478463 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-07 00:53:00.478467 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:00.524) 0:01:04.227 ***** 2026-01-07 00:53:00.478487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.478498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.478503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.478510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.478516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.478527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:53:00.478554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478562 | orchestrator |
2026-01-07 00:53:00.478566 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-07 00:53:00.478571 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:03.374) 0:01:07.601 *****
2026-01-07 00:53:00.478580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:53:00.478592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478603 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.478607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:53:00.478622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:53:00.478637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478645 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.478649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478656 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.478660 | orchestrator |
2026-01-07 00:53:00.478664 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-07 00:53:00.478668 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.712) 0:01:08.313 *****
2026-01-07 00:53:00.478676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478711 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.478718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478730 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.478736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.478748 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.478754 | orchestrator |
2026-01-07 00:53:00.478760 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-07 00:53:00.478766 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:00.900) 0:01:09.214 *****
2026-01-07 00:53:00.478772 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.478778 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.478783 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.478790 | orchestrator |
2026-01-07 00:53:00.478796 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-07 00:53:00.478802 | orchestrator | Wednesday 07 January 2026 00:47:57 +0000 (0:00:01.473) 0:01:10.687 *****
2026-01-07 00:53:00.478808 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.478814 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.478820 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.478835 | orchestrator |
2026-01-07 00:53:00.478841 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-07 00:53:00.478844 | orchestrator | Wednesday 07 January 2026 00:47:59 +0000 (0:00:01.918) 0:01:12.606 *****
2026-01-07 00:53:00.478852 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.478856 | orchestrator |
2026-01-07 00:53:00.478860 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-07 00:53:00.478863 | orchestrator | Wednesday 07 January 2026 00:48:00 +0000 (0:00:00.670) 0:01:13.276 *****
2026-01-07 00:53:00.478868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.478978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.478983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.479056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479068 | orchestrator |
2026-01-07 00:53:00.479074 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-07 00:53:00.479080 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:04.070) 0:01:17.347 *****
2026-01-07 00:53:00.479091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.479104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.479122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479129 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.479133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479141 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.479153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.479158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.479171 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.479175 | orchestrator |
2026-01-07 00:53:00.479179 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-07 00:53:00.479183 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:00.889) 0:01:18.236 *****
2026-01-07 00:53:00.479188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479208 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.479214 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.479224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.479243 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.479249 | orchestrator |
2026-01-07 00:53:00.479255 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-07 00:53:00.479261 | orchestrator | Wednesday 07 January 2026 00:48:06 +0000 (0:00:01.123) 0:01:19.359 *****
2026-01-07 00:53:00.479267 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.479273 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.479282 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.479288 | orchestrator |
2026-01-07 00:53:00.479293 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-07 00:53:00.479299 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:01.343) 0:01:20.703 *****
2026-01-07 00:53:00.479304 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.479309 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.479315 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.479321 | orchestrator |
2026-01-07 00:53:00.479328 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-07 00:53:00.479333 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:01.944) 0:01:22.647 *****
2026-01-07 00:53:00.479338 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.479344 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.479349 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.479486 | orchestrator |
2026-01-07 00:53:00.479500 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-07 00:53:00.479504 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:00.254) 0:01:22.901 *****
2026-01-07 00:53:00.479509 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.479512 | orchestrator |
2026-01-07 00:53:00.479517 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-07 00:53:00.479520 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:00.726) 0:01:23.628 *****
2026-01-07 00:53:00.479525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:53:00.479540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:53:00.479551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:53:00.479555 | orchestrator |
2026-01-07 00:53:00.479561 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-07 00:53:00.479568 | orchestrator | Wednesday 07 January 2026 00:48:13 +0000 (0:00:02.975) 0:01:26.603 *****
2026-01-07 00:53:00.479580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode':
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:53:00.479592 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.479598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:53:00.479603 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.479616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-07 00:53:00.479623 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.479634 | orchestrator | 2026-01-07 00:53:00.479639 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-07 00:53:00.479645 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:01.919) 0:01:28.523 ***** 2026-01-07 00:53:00.479652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479667 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.479673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479689 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.479695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:53:00.479708 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.479714 | orchestrator | 2026-01-07 00:53:00.479720 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw 
ProxySQL users config] *********** 2026-01-07 00:53:00.479726 | orchestrator | Wednesday 07 January 2026 00:48:17 +0000 (0:00:01.875) 0:01:30.399 ***** 2026-01-07 00:53:00.479732 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.479737 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.479743 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.479750 | orchestrator | 2026-01-07 00:53:00.479753 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-07 00:53:00.479757 | orchestrator | Wednesday 07 January 2026 00:48:18 +0000 (0:00:00.508) 0:01:30.907 ***** 2026-01-07 00:53:00.479761 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.479765 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.479768 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.479772 | orchestrator | 2026-01-07 00:53:00.479781 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-07 00:53:00.479785 | orchestrator | Wednesday 07 January 2026 00:48:19 +0000 (0:00:00.978) 0:01:31.886 ***** 2026-01-07 00:53:00.479789 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.479793 | orchestrator | 2026-01-07 00:53:00.479797 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-07 00:53:00.479804 | orchestrator | Wednesday 07 January 2026 00:48:19 +0000 (0:00:00.662) 0:01:32.548 ***** 2026-01-07 00:53:00.479810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.479816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2026-01-07 00:53:00.479831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.479849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479864 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.479868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479892 | orchestrator | 2026-01-07 00:53:00.479896 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-07 00:53:00.479900 | orchestrator | Wednesday 07 January 2026 00:48:23 +0000 (0:00:04.175) 0:01:36.723 ***** 2026-01-07 00:53:00.479904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': 
'30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.479927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479947 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.479952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.479956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479975 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.479982 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.479986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.479998 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.480002 | orchestrator | 2026-01-07 00:53:00.480009 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-07 00:53:00.480013 | orchestrator | Wednesday 07 January 2026 00:48:24 +0000 (0:00:00.758) 0:01:37.482 ***** 2026-01-07 00:53:00.480017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480031 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.480035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480043 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.480047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.480055 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.480059 | orchestrator | 2026-01-07 00:53:00.480066 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-07 00:53:00.480070 | orchestrator | Wednesday 07 January 2026 00:48:25 +0000 (0:00:01.191) 0:01:38.673 ***** 2026-01-07 00:53:00.480074 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.480078 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.480082 | orchestrator | 
changed: [testbed-node-2] 2026-01-07 00:53:00.480086 | orchestrator | 2026-01-07 00:53:00.480090 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-07 00:53:00.480093 | orchestrator | Wednesday 07 January 2026 00:48:27 +0000 (0:00:01.242) 0:01:39.916 ***** 2026-01-07 00:53:00.480097 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.480101 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.480105 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.480108 | orchestrator | 2026-01-07 00:53:00.480112 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-07 00:53:00.480116 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:01.839) 0:01:41.755 ***** 2026-01-07 00:53:00.480120 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.480124 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.480127 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.480131 | orchestrator | 2026-01-07 00:53:00.480135 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-07 00:53:00.480139 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:00.287) 0:01:42.042 ***** 2026-01-07 00:53:00.480143 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.480146 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.480150 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.480154 | orchestrator | 2026-01-07 00:53:00.480158 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-07 00:53:00.480162 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:00.278) 0:01:42.321 ***** 2026-01-07 00:53:00.480166 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.480169 | orchestrator | 2026-01-07 00:53:00.480173 | 
orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-07 00:53:00.480177 | orchestrator | Wednesday 07 January 2026 00:48:30 +0000 (0:00:00.883) 0:01:43.205 ***** 2026-01-07 00:53:00.480188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.480193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480197 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.480234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.480276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480310 | orchestrator | 2026-01-07 00:53:00.480314 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-07 00:53:00.480317 | orchestrator | Wednesday 07 January 2026 00:48:34 +0000 (0:00:03.751) 0:01:46.956 ***** 2026-01-07 00:53:00.480325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.480329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.480418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480430 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:53:00.480440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480468 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.480477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 00:53:00.480489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480514 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.480520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.480526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.485309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.485465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.485477 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.485486 | orchestrator | 2026-01-07 00:53:00.485493 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-07 00:53:00.485502 | orchestrator | Wednesday 07 January 2026 00:48:34 +0000 (0:00:00.702) 0:01:47.658 ***** 2026-01-07 00:53:00.485510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485529 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.485535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485566 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.485572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.485584 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.485590 | orchestrator | 2026-01-07 00:53:00.485597 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-07 00:53:00.485603 | orchestrator | Wednesday 07 January 2026 00:48:36 +0000 (0:00:01.227) 0:01:48.886 ***** 2026-01-07 00:53:00.485609 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.485616 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.485622 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.485629 | orchestrator | 2026-01-07 00:53:00.485636 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-07 00:53:00.485643 | orchestrator | Wednesday 07 January 2026 00:48:37 +0000 (0:00:01.231) 0:01:50.117 ***** 2026-01-07 00:53:00.485649 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.485655 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.485661 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.485667 | orchestrator | 2026-01-07 00:53:00.485673 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-07 00:53:00.485685 | orchestrator | Wednesday 07 January 2026 00:48:39 +0000 (0:00:01.905) 0:01:52.022 ***** 2026-01-07 00:53:00.485691 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.485697 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.485704 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.485710 | orchestrator | 
2026-01-07 00:53:00.485716 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-07 00:53:00.485722 | orchestrator | Wednesday 07 January 2026 00:48:39 +0000 (0:00:00.303) 0:01:52.326 ***** 2026-01-07 00:53:00.485728 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.485735 | orchestrator | 2026-01-07 00:53:00.485741 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-07 00:53:00.485747 | orchestrator | Wednesday 07 January 2026 00:48:40 +0000 (0:00:00.965) 0:01:53.291 ***** 2026-01-07 00:53:00.485804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:53:00.485819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.485843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:53:00.485854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.485870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:53:00.485880 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.485887 | orchestrator | 2026-01-07 00:53:00.485894 | orchestrator | TASK 
[haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-07 00:53:00.485906 | orchestrator | Wednesday 07 January 2026 00:48:44 +0000 (0:00:03.885) 0:01:57.177 ***** 2026-01-07 00:53:00.485916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 
00:53:00.485932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 00:53:00.485942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.485955 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.485962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.485970 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.486006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 00:53:00.486077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.486083 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.486087 | orchestrator | 2026-01-07 00:53:00.486091 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-07 00:53:00.486095 | orchestrator | Wednesday 07 January 2026 00:48:46 +0000 (0:00:02.547) 0:01:59.724 ***** 2026-01-07 
00:53:00.486103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486117 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.486121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486136 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.486140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:53:00.486148 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.486151 | orchestrator | 2026-01-07 00:53:00.486155 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-07 00:53:00.486159 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:03.264) 0:02:02.989 ***** 2026-01-07 00:53:00.486163 | orchestrator | changed: [testbed-node-0] 2026-01-07 
00:53:00.486167 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.486171 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.486175 | orchestrator | 2026-01-07 00:53:00.486178 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-07 00:53:00.486182 | orchestrator | Wednesday 07 January 2026 00:48:51 +0000 (0:00:01.194) 0:02:04.183 ***** 2026-01-07 00:53:00.486186 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.486190 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.486194 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.486198 | orchestrator | 2026-01-07 00:53:00.486202 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-07 00:53:00.486210 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:01.867) 0:02:06.051 ***** 2026-01-07 00:53:00.486214 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.486217 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.486221 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.486225 | orchestrator | 2026-01-07 00:53:00.486229 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-07 00:53:00.486233 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:00.263) 0:02:06.314 ***** 2026-01-07 00:53:00.486240 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.486244 | orchestrator | 2026-01-07 00:53:00.486248 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-07 00:53:00.486251 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:00.753) 0:02:07.067 ***** 2026-01-07 00:53:00.486256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486272 | orchestrator |
2026-01-07 00:53:00.486276 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-01-07 00:53:00.486280 | orchestrator | Wednesday 07 January 2026 00:48:57 +0000 (0:00:03.640) 0:02:10.708 *****
2026-01-07 00:53:00.486284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486291 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486304 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.486312 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486316 | orchestrator |
2026-01-07 00:53:00.486319 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-07 00:53:00.486323 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:00.346) 0:02:11.054 *****
2026-01-07 00:53:00.486327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486337 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486420 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.486439 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486443 | orchestrator |
2026-01-07 00:53:00.486447 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-07 00:53:00.486451 | orchestrator | Wednesday 07 January 2026 00:48:58 +0000 (0:00:00.561) 0:02:11.616 *****
2026-01-07 00:53:00.486455 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.486458 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.486462 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.486466 | orchestrator |
2026-01-07 00:53:00.486470 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-07 00:53:00.486473 | orchestrator | Wednesday 07 January 2026 00:49:00 +0000 (0:00:01.421) 0:02:13.038 *****
2026-01-07 00:53:00.486477 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.486481 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.486485 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.486489 | orchestrator |
2026-01-07 00:53:00.486492 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-07 00:53:00.486496 | orchestrator | Wednesday 07 January 2026 00:49:02 +0000 (0:00:01.963) 0:02:15.001 *****
2026-01-07 00:53:00.486500 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486504 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486507 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486511 | orchestrator |
2026-01-07 00:53:00.486515 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-07 00:53:00.486519 | orchestrator | Wednesday 07 January 2026 00:49:02 +0000 (0:00:00.291) 0:02:15.293 *****
2026-01-07 00:53:00.486523 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.486529 | orchestrator |
2026-01-07 00:53:00.486535 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-07 00:53:00.486541 | orchestrator | Wednesday 07 January 2026 00:49:03 +0000 (0:00:00.923) 0:02:16.216 *****
2026-01-07 00:53:00.486557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486601 | orchestrator |
2026-01-07 00:53:00.486608 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-01-07 00:53:00.486614 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:03.891) 0:02:20.107 *****
2026-01-07 00:53:00.486627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486634 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486659 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:53:00.486671 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486675 | orchestrator |
2026-01-07 00:53:00.486679 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-01-07 00:53:00.486683 | orchestrator | Wednesday 07 January 2026 00:49:07 +0000 (0:00:00.593) 0:02:20.701 *****
2026-01-07 00:53:00.486688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:53:00.486726 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:53:00.486753 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-07 00:53:00.486772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:53:00.486779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:53:00.486783 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486787 | orchestrator |
2026-01-07 00:53:00.486791 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-07 00:53:00.486795 | orchestrator | Wednesday 07 January 2026 00:49:09 +0000 (0:00:01.142) 0:02:21.844 *****
2026-01-07 00:53:00.486799 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.486803 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.486806 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.486810 | orchestrator |
2026-01-07 00:53:00.486814 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-07 00:53:00.486818 | orchestrator | Wednesday 07 January 2026 00:49:10 +0000 (0:00:01.440) 0:02:23.285 *****
2026-01-07 00:53:00.486822 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.486825 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.486829 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.486833 | orchestrator |
2026-01-07 00:53:00.486837 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-07 00:53:00.486840 | orchestrator | Wednesday 07 January 2026 00:49:12 +0000 (0:00:02.273) 0:02:25.558 *****
2026-01-07 00:53:00.486844 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486848 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486852 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486856 | orchestrator |
2026-01-07 00:53:00.486859 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-07 00:53:00.486863 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:00.481) 0:02:26.000 *****
2026-01-07 00:53:00.486867 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.486871 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.486875 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.486878 | orchestrator |
2026-01-07 00:53:00.486882 | orchestrator | TASK [include_role : keystone] *************************************************
2026-01-07 00:53:00.486886 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:01.164) 0:02:26.482 *****
2026-01-07 00:53:00.486890 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.486894 | orchestrator |
2026-01-07 00:53:00.486897 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-01-07 00:53:00.486901 | orchestrator | Wednesday 07 January 2026 00:49:14 +0000 (0:00:01.164) 0:02:27.646 *****
2026-01-07 00:53:00.486908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:53:00.486918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:53:00.486923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:53:00.486983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:53:00.486992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:53:00.486999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:53:00.487009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:53:00.487021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:53:00.487032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:53:00.487038 | orchestrator |
2026-01-07 00:53:00.487044 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-07 00:53:00.487050 | orchestrator | Wednesday 07 January 2026 00:49:19 +0000 (0:00:04.432) 0:02:32.078 *****
2026-01-07 00:53:00.487056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:53:00.487063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:53:00.487078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:53:00.487085 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.487092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-07 00:53:00.487104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:53:00.487112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:53:00.487117 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.487122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-07 00:53:00.487132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:53:00.487137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:53:00.487141 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.487144 | orchestrator | 2026-01-07 00:53:00.487148 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-07 00:53:00.487152 | orchestrator | Wednesday 07 January 2026 00:49:19 +0000 (0:00:00.504) 0:02:32.582 ***** 2026-01-07 00:53:00.487156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487169 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.487173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487181 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.487184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-07 00:53:00.487192 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.487196 | orchestrator | 2026-01-07 00:53:00.487203 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-07 00:53:00.487207 | orchestrator | Wednesday 07 January 2026 00:49:20 +0000 (0:00:00.816) 0:02:33.399 ***** 2026-01-07 00:53:00.487210 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.487214 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.487218 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.487222 | orchestrator | 2026-01-07 00:53:00.487226 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-07 00:53:00.487229 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:01.021) 0:02:34.420 ***** 2026-01-07 00:53:00.487233 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 00:53:00.487237 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.487240 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.487244 | orchestrator | 2026-01-07 00:53:00.487248 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-07 00:53:00.487252 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:01.708) 0:02:36.128 ***** 2026-01-07 00:53:00.487255 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.487259 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.487263 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.487267 | orchestrator | 2026-01-07 00:53:00.487270 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-07 00:53:00.487274 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.322) 0:02:36.451 ***** 2026-01-07 00:53:00.487283 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.487287 | orchestrator | 2026-01-07 00:53:00.487291 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-07 00:53:00.487295 | orchestrator | Wednesday 07 January 2026 00:49:24 +0000 (0:00:00.992) 0:02:37.443 ***** 2026-01-07 00:53:00.487299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.487307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.487312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.487331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487335 | orchestrator | 2026-01-07 00:53:00.487340 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-07 00:53:00.487343 | orchestrator | Wednesday 07 January 2026 00:49:28 +0000 (0:00:04.030) 0:02:41.474 ***** 2026-01-07 00:53:00.487352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.487375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487379 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.487386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.487390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487395 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.487406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.487426 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.487433 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.487439 | orchestrator | 2026-01-07 00:53:00.487446 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-07 00:53:00.487452 | orchestrator | Wednesday 07 January 2026 00:49:29 +0000 (0:00:00.906) 0:02:42.381 ***** 2026-01-07 00:53:00.487458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.487465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.487471 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.487477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 
00:53:00.487483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.487489 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.487500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.487506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.487513 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.487518 | orchestrator | 2026-01-07 00:53:00.487525 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-07 00:53:00.487531 | orchestrator | Wednesday 07 January 2026 00:49:30 +0000 (0:00:01.059) 0:02:43.440 ***** 2026-01-07 00:53:00.487537 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.487543 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.487549 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.487553 | orchestrator | 2026-01-07 00:53:00.487556 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-07 00:53:00.487560 | orchestrator | Wednesday 07 January 2026 00:49:32 +0000 (0:00:01.318) 0:02:44.759 ***** 2026-01-07 00:53:00.487564 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.487568 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.487572 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.487575 | 
orchestrator | 2026-01-07 00:53:00.487579 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-07 00:53:00.487588 | orchestrator | Wednesday 07 January 2026 00:49:33 +0000 (0:00:01.732) 0:02:46.491 ***** 2026-01-07 00:53:00.487592 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.487596 | orchestrator | 2026-01-07 00:53:00.487599 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-07 00:53:00.487603 | orchestrator | Wednesday 07 January 2026 00:49:34 +0000 (0:00:00.789) 0:02:47.281 ***** 2026-01-07 00:53:00.487617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.487625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.487660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.487678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487719 | orchestrator |
2026-01-07 00:53:00.487725 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-07 00:53:00.487731 | orchestrator | Wednesday 07 January 2026 00:49:37 +0000 (0:00:03.004) 0:02:50.286 *****
2026-01-07 00:53:00.487742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.487749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487777 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.487785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.487793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487805 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.487812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.487822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.487838 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.487842 | orchestrator |
2026-01-07 00:53:00.487846 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-07 00:53:00.487850 | orchestrator | Wednesday 07 January 2026 00:49:38 +0000 (0:00:00.839) 0:02:51.126 *****
2026-01-07 00:53:00.487854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487862 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.487866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487873 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.487877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.487889 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.487893 | orchestrator |
2026-01-07 00:53:00.487897 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-07 00:53:00.487901 | orchestrator | Wednesday 07 January 2026 00:49:39 +0000 (0:00:00.843) 0:02:51.969 *****
2026-01-07 00:53:00.487912 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.487916 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.487920 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.487924 | orchestrator |
2026-01-07 00:53:00.487928 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-07 00:53:00.487931 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:01.378) 0:02:53.348 *****
2026-01-07 00:53:00.487935 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.487939 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.487943 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.487947 | orchestrator |
2026-01-07 00:53:00.487951 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-07 00:53:00.487954 | orchestrator | Wednesday 07 January 2026 00:49:43 +0000 (0:00:02.778) 0:02:56.126 *****
2026-01-07 00:53:00.487959 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.487964 | orchestrator |
2026-01-07 00:53:00.487970 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-07 00:53:00.487976 | orchestrator | Wednesday 07 January 2026 00:49:45 +0000 (0:00:01.958) 0:02:58.085 *****
2026-01-07 00:53:00.487983 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:53:00.487989 | orchestrator |
2026-01-07 00:53:00.487994 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-07 00:53:00.488000 | orchestrator | Wednesday 07 January 2026 00:49:49 +0000 (0:00:03.897) 0:03:01.982 *****
2026-01-07 00:53:00.488013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488031 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.488041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488059 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.488066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488089 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.488095 | orchestrator |
2026-01-07 00:53:00.488102 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-07 00:53:00.488107 | orchestrator | Wednesday 07 January 2026 00:49:51 +0000 (0:00:02.293) 0:03:04.276 *****
2026-01-07 00:53:00.488116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488129 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.488136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488145 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.488153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:53:00.488161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-07 00:53:00.488165 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.488169 | orchestrator |
2026-01-07 00:53:00.488173 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-07 00:53:00.488177 | orchestrator | Wednesday 07 January 2026 00:49:54 +0000 (0:00:03.732) 0:03:06.938 *****
2026-01-07 00:53:00.488186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488200 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.488206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488222 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.488228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-07 00:53:00.488246 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.488252 | orchestrator |
2026-01-07 00:53:00.488258 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-01-07 00:53:00.488264 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:03.732) 0:03:10.670 *****
2026-01-07 00:53:00.488271 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.488277 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.488282 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.488289 | orchestrator |
2026-01-07 00:53:00.488293 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-07
00:53:00.488297 | orchestrator | Wednesday 07 January 2026 00:49:59 +0000 (0:00:01.917) 0:03:12.588 ***** 2026-01-07 00:53:00.488301 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488305 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488308 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488312 | orchestrator | 2026-01-07 00:53:00.488316 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-07 00:53:00.488320 | orchestrator | Wednesday 07 January 2026 00:50:01 +0000 (0:00:01.543) 0:03:14.131 ***** 2026-01-07 00:53:00.488323 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488327 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488331 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488335 | orchestrator | 2026-01-07 00:53:00.488339 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-07 00:53:00.488343 | orchestrator | Wednesday 07 January 2026 00:50:01 +0000 (0:00:00.262) 0:03:14.393 ***** 2026-01-07 00:53:00.488347 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.488350 | orchestrator | 2026-01-07 00:53:00.488394 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-07 00:53:00.488399 | orchestrator | Wednesday 07 January 2026 00:50:02 +0000 (0:00:01.137) 0:03:15.531 ***** 2026-01-07 00:53:00.488473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:53:00.488493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:53:00.488503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:53:00.488507 | orchestrator | 2026-01-07 00:53:00.488511 | orchestrator | TASK [haproxy-config : Add configuration for memcached 
when using single external frontend] *** 2026-01-07 00:53:00.488515 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:01.240) 0:03:16.772 ***** 2026-01-07 00:53:00.488519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:53:00.488523 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:53:00.488534 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:53:00.488546 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488550 | orchestrator | 2026-01-07 00:53:00.488554 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-07 00:53:00.488558 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:00.337) 0:03:17.109 ***** 2026-01-07 00:53:00.488562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:53:00.488566 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:53:00.488578 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:53:00.488586 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488590 | orchestrator | 2026-01-07 00:53:00.488594 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-07 00:53:00.488598 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.710) 0:03:17.820 ***** 2026-01-07 00:53:00.488602 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488606 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488609 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488613 | orchestrator | 2026-01-07 00:53:00.488617 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-07 00:53:00.488621 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.389) 0:03:18.210 ***** 2026-01-07 00:53:00.488625 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488628 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488632 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488636 | orchestrator | 2026-01-07 00:53:00.488640 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-07 00:53:00.488644 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:01.082) 0:03:19.292 ***** 2026-01-07 00:53:00.488648 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.488651 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.488655 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.488659 | orchestrator | 2026-01-07 00:53:00.488663 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-07 00:53:00.488667 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:00.265) 0:03:19.557 ***** 2026-01-07 00:53:00.488671 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.488674 | orchestrator | 2026-01-07 00:53:00.488678 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-07 00:53:00.488683 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:01.264) 0:03:20.821 ***** 2026-01-07 00:53:00.488690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.488699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.488709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-07 00:53:00.488713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-07 00:53:00.488718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.488727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00.488735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00.488740 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-07 00:53:00.488747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:53:00.488751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.488756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-07 00:53:00.488760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00.488770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-01-07 00:53:00.488774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.488781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:53:00.488785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.488790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.488800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.488804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.488812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-07 00:53:00.488816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-07 00:53:00.488823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-07 00:53:00.488832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-07 00:53:00.488839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-07 00:53:00.488876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-07 00:53:00.488883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.488888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.488895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-07 00:53:00.488906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-07 00:53:00.488926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-07 00:53:00.488935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.488941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.488947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.488958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-07 00:53:00.488965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.488976 | orchestrator |
2026-01-07 00:53:00.488982 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-07 00:53:00.488989 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:05.464) 0:03:26.286 *****
2026-01-07 00:53:00.488998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.489002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-07 00:53:00.489013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-07 00:53:00.489021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.489028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-07 00:53:00.489052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-07 00:53:00.489064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-07 00:53:00.489068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.489075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-07 00:53:00.489098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-07 00:53:00.489133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-07 00:53:00.489140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.489152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-07 00:53:00.489158 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.489164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-07 00:53:00.489181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-07 00:53:00.489193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.489199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name':
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.489210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:53:00.489216 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.489222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.489234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.489244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-07 00:53:00.489254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-07 00:53:00.489261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.489267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00.489273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:00.489287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-01-07 00:53:00.489582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:53:00.489593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.489616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-07 00:53:00.489622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-07 00:53:00.489628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.489659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:53:00.489668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:53:00.489673 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.489680 | orchestrator | 2026-01-07 00:53:00.489685 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-07 00:53:00.489691 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:02.077) 0:03:28.363 ***** 2026-01-07 00:53:00.489697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489771 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.489782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.489805 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.489809 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.489813 | orchestrator | 2026-01-07 00:53:00.489817 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-07 00:53:00.489825 | orchestrator | Wednesday 07 January 2026 00:50:17 +0000 (0:00:01.443) 0:03:29.807 ***** 2026-01-07 00:53:00.489830 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.489833 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.489837 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.489841 | orchestrator | 2026-01-07 00:53:00.489845 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL rules config] ************ 2026-01-07 00:53:00.489849 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:01.276) 0:03:31.084 ***** 2026-01-07 00:53:00.489852 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.489856 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.489860 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.489864 | orchestrator | 2026-01-07 00:53:00.489868 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-07 00:53:00.489871 | orchestrator | Wednesday 07 January 2026 00:50:20 +0000 (0:00:02.019) 0:03:33.103 ***** 2026-01-07 00:53:00.489875 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.489879 | orchestrator | 2026-01-07 00:53:00.489883 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-07 00:53:00.489887 | orchestrator | Wednesday 07 January 2026 00:50:21 +0000 (0:00:01.210) 0:03:34.314 ***** 2026-01-07 00:53:00.489898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.489906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.489916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.489925 | orchestrator | 2026-01-07 00:53:00.489929 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-07 00:53:00.489933 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:03.425) 0:03:37.739 ***** 2026-01-07 00:53:00.489941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.489946 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.489952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.489956 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.489963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.489970 | orchestrator | 
skipping: [testbed-node-2] 2026-01-07 00:53:00.489974 | orchestrator | 2026-01-07 00:53:00.489978 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-07 00:53:00.489982 | orchestrator | Wednesday 07 January 2026 00:50:25 +0000 (0:00:00.453) 0:03:38.192 ***** 2026-01-07 00:53:00.489986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.489990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.489996 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.490000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.490004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.490008 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.490042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.490053 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.490058 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.490061 | orchestrator | 2026-01-07 00:53:00.490066 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-07 00:53:00.490071 | orchestrator | Wednesday 07 January 2026 00:50:26 +0000 (0:00:00.880) 0:03:39.073 ***** 2026-01-07 00:53:00.490075 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.490080 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.490084 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.490089 | orchestrator | 2026-01-07 00:53:00.490093 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-07 00:53:00.490098 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:01.159) 0:03:40.232 ***** 2026-01-07 00:53:00.490102 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.490638 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.490656 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.490660 | orchestrator | 2026-01-07 00:53:00.490665 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-07 00:53:00.490669 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:02.225) 0:03:42.458 ***** 2026-01-07 00:53:00.490673 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.490677 | orchestrator | 2026-01-07 00:53:00.490681 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-07 00:53:00.490685 | orchestrator | Wednesday 07 January 2026 00:50:31 +0000 
(0:00:01.326) 0:03:43.785 *****
2026-01-07 00:53:00.490695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490850 | orchestrator |
2026-01-07 00:53:00.490854 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-01-07 00:53:00.490858 | orchestrator | Wednesday 07 January 2026 00:50:37 +0000 (0:00:06.039) 0:03:49.825 *****
2026-01-07 00:53:00.490874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490898 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.490902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.490925 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.490931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:53:00.490946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.493141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 00:53:00.493190 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493196 | orchestrator |
2026-01-07 00:53:00.493201 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-01-07 00:53:00.493206 | orchestrator | Wednesday 07 January 2026 00:50:37 +0000 (0:00:00.691) 0:03:50.516 *****
2026-01-07 00:53:00.493214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493244 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493275 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-07 00:53:00.493324 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493328 | orchestrator |
2026-01-07 00:53:00.493332 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-01-07 00:53:00.493336 | orchestrator | Wednesday 07 January 2026 00:50:38 +0000 (0:00:00.831) 0:03:51.348 *****
2026-01-07 00:53:00.493340 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.493344 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.493347 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.493351 | orchestrator |
2026-01-07 00:53:00.493372 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-01-07 00:53:00.493378 | orchestrator | Wednesday 07 January 2026 00:50:39 +0000 (0:00:01.300) 0:03:52.649 *****
2026-01-07 00:53:00.493383 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.493389 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.493394 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.493400 | orchestrator |
2026-01-07 00:53:00.493405 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-01-07 00:53:00.493411 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:01.891) 0:03:54.541 *****
2026-01-07 00:53:00.493417 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.493423 | orchestrator |
2026-01-07 00:53:00.493429 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-01-07 00:53:00.493435 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:01.243) 0:03:55.785 *****
2026-01-07 00:53:00.493441 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-01-07 00:53:00.493449 | orchestrator |
2026-01-07 00:53:00.493455 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-01-07 00:53:00.493459 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:01.570) 0:03:57.356 *****
2026-01-07 00:53:00.493464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493480 | orchestrator |
2026-01-07 00:53:00.493484 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-01-07 00:53:00.493489 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:04.835) 0:04:02.191 *****
2026-01-07 00:53:00.493493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493501 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493526 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493533 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493537 | orchestrator |
2026-01-07 00:53:00.493541 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-01-07 00:53:00.493545 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:01.390) 0:04:03.582 *****
2026-01-07 00:53:00.493550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493559 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493576 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-07 00:53:00.493585 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493589 | orchestrator |
2026-01-07 00:53:00.493593 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-07 00:53:00.493597 | orchestrator | Wednesday 07 January 2026 00:50:52 +0000 (0:00:01.415) 0:04:04.998 *****
2026-01-07 00:53:00.493601 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.493607 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.493611 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.493615 | orchestrator |
2026-01-07 00:53:00.493619 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-07 00:53:00.493623 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:02.043) 0:04:07.042 *****
2026-01-07 00:53:00.493626 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.493630 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.493634 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.493638 | orchestrator |
2026-01-07 00:53:00.493642 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-01-07 00:53:00.493646 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:02.554) 0:04:09.596 *****
2026-01-07 00:53:00.493651 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-01-07 00:53:00.493655 | orchestrator |
2026-01-07 00:53:00.493659 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-01-07 00:53:00.493662 | orchestrator | Wednesday 07 January 2026 00:50:57 +0000 (0:00:00.849) 0:04:10.445 *****
2026-01-07 00:53:00.493667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493672 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493691 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493698 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493702 | orchestrator |
2026-01-07 00:53:00.493706 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-01-07 00:53:00.493710 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:02.207) 0:04:12.653 *****
2026-01-07 00:53:00.493714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493718 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493732 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-07 00:53:00.493740 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493744 | orchestrator |
2026-01-07 00:53:00.493748 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-01-07 00:53:00.493751 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:01.478) 0:04:14.132 *****
2026-01-07 00:53:00.493755 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.493759 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.493763 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.493766 | orchestrator |
2026-01-07 00:53:00.493770 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-07 00:53:00.493774 | orchestrator | Wednesday 07 January 2026 00:51:03 +0000 (0:00:01.749) 0:04:15.881 *****
2026-01-07 00:53:00.493778 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.493782 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.493786 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.493790 | orchestrator |
2026-01-07 00:53:00.493834 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-07 00:53:00.493844 | orchestrator | Wednesday 07 January 2026 00:51:05 +0000 (0:00:01.990) 0:04:17.871 *****
2026-01-07 00:53:00.493848 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.493852 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.493856 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.493859 | orchestrator |
2026-01-07 00:53:00.493863 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-01-07 00:53:00.493867 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:02.563) 0:04:20.435 *****
2026-01-07 00:53:00.493882 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-01-07 00:53:00.493887 | orchestrator |
2026-01-07 00:53:00.493891 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-07 00:53:00.493894 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.790) 0:04:21.225 *****
2026-01-07 00:53:00.493898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083',
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493902 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.493907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493915 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.493919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493923 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.493927 | orchestrator | 2026-01-07 00:53:00.493931 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-07 00:53:00.493938 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:01.279) 0:04:22.505 ***** 2026-01-07 00:53:00.493946 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493954 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.493960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493966 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.493972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:53:00.493979 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.493984 | orchestrator | 2026-01-07 00:53:00.493991 | orchestrator | TASK [haproxy-config : Configuring 
firewall for nova-cell:nova-serialproxy] **** 2026-01-07 00:53:00.493997 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:01.078) 0:04:23.584 ***** 2026-01-07 00:53:00.494003 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494008 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494059 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494063 | orchestrator | 2026-01-07 00:53:00.494067 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:53:00.494071 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:01.598) 0:04:25.182 ***** 2026-01-07 00:53:00.494075 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:00.494079 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:00.494082 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:00.494090 | orchestrator | 2026-01-07 00:53:00.494094 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:53:00.494098 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:02.617) 0:04:27.799 ***** 2026-01-07 00:53:00.494102 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:00.494105 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:00.494109 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:00.494113 | orchestrator | 2026-01-07 00:53:00.494117 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-07 00:53:00.494120 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:02.628) 0:04:30.428 ***** 2026-01-07 00:53:00.494124 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.494128 | orchestrator | 2026-01-07 00:53:00.494132 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-07 00:53:00.494135 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 
(0:00:01.412) 0:04:31.841 ***** 2026-01-07 00:53:00.494140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:53:00.494148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:53:00.494157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:53:00.494197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494224 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494238 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494242 | orchestrator | 2026-01-07 00:53:00.494246 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-07 00:53:00.494250 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:03.119) 0:04:34.961 ***** 2026-01-07 00:53:00.494254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:53:00.494272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494290 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:53:00.494298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494338 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:53:00.494372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:53:00.494378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:53:00.494413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:53:00.494419 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494425 | orchestrator | 2026-01-07 00:53:00.494432 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-07 00:53:00.494438 | orchestrator | Wednesday 07 January 2026 00:51:23 +0000 
(0:00:01.106) 0:04:36.068 ***** 2026-01-07 00:53:00.494444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494457 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494468 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:53:00.494485 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494489 | orchestrator | 2026-01-07 00:53:00.494492 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-07 00:53:00.494496 | orchestrator | Wednesday 07 
January 2026 00:51:24 +0000 (0:00:00.831) 0:04:36.899 ***** 2026-01-07 00:53:00.494504 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.494508 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.494511 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.494515 | orchestrator | 2026-01-07 00:53:00.494519 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-07 00:53:00.494523 | orchestrator | Wednesday 07 January 2026 00:51:25 +0000 (0:00:01.112) 0:04:38.011 ***** 2026-01-07 00:53:00.494527 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.494530 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.494534 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.494538 | orchestrator | 2026-01-07 00:53:00.494542 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-07 00:53:00.494546 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:01.791) 0:04:39.803 ***** 2026-01-07 00:53:00.494549 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.494553 | orchestrator | 2026-01-07 00:53:00.494557 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-07 00:53:00.494561 | orchestrator | Wednesday 07 January 2026 00:51:28 +0000 (0:00:01.419) 0:04:41.223 ***** 2026-01-07 00:53:00.494578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.494585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.494589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.494596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:53:00.494613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:53:00.494619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:53:00.494623 | orchestrator | 2026-01-07 00:53:00.494627 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 
2026-01-07 00:53:00.494630 | orchestrator | Wednesday 07 January 2026 00:51:33 +0000 (0:00:04.955) 0:04:46.178 ***** 2026-01-07 00:53:00.494637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.494644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:53:00.494649 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.494668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:53:00.494673 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.494686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:53:00.494690 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494694 | orchestrator | 2026-01-07 00:53:00.494698 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-07 00:53:00.494702 | orchestrator | Wednesday 07 January 2026 00:51:34 +0000 (0:00:00.618) 0:04:46.796 ***** 2026-01-07 00:53:00.494706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.494719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-07 00:53:00.494724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-07 00:53:00.494729 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:53:00.494733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.494738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.494742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-07 00:53:00.494749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-07 00:53:00.494753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-07 00:53:00.494757 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  
2026-01-07 00:53:00.494768 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494771 | orchestrator | 2026-01-07 00:53:00.494775 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-07 00:53:00.494779 | orchestrator | Wednesday 07 January 2026 00:51:35 +0000 (0:00:01.490) 0:04:48.287 ***** 2026-01-07 00:53:00.494783 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494787 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494791 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494794 | orchestrator | 2026-01-07 00:53:00.494798 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-07 00:53:00.494802 | orchestrator | Wednesday 07 January 2026 00:51:35 +0000 (0:00:00.441) 0:04:48.728 ***** 2026-01-07 00:53:00.494806 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.494809 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.494813 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.494817 | orchestrator | 2026-01-07 00:53:00.494821 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-07 00:53:00.494825 | orchestrator | Wednesday 07 January 2026 00:51:37 +0000 (0:00:01.361) 0:04:50.089 ***** 2026-01-07 00:53:00.494828 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.494832 | orchestrator | 2026-01-07 00:53:00.494836 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-07 00:53:00.494840 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:01.659) 0:04:51.749 ***** 2026-01-07 00:53:00.494853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 00:53:00.494858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.494865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-07 00:53:00.494870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.494880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 00:53:00.494885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.494898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.494916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 00:53:00.494920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.494925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.494948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.494957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.494962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.494970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.494986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.494991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.494998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:53:00.495028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.495034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495052 | orchestrator | 2026-01-07 00:53:00.495056 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-07 00:53:00.495060 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:04.104) 0:04:55.854 ***** 2026-01-07 00:53:00.495073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-07 00:53:00.495080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.495084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-07 00:53:00.495092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-07 00:53:00.495105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.495112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.495116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.495120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495135 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-07 00:53:00.495154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:53:00.495164 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.495175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.495186 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:53:00.495208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-07 00:53:00.495221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495224 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:53:00.495234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 
00:53:00.495243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:53:00.495247 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495251 | orchestrator | 2026-01-07 00:53:00.495255 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-07 00:53:00.495264 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:00.807) 0:04:56.661 ***** 2026-01-07 00:53:00.495268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495289 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495309 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-07 00:53:00.495327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-07 00:53:00.495339 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495342 | orchestrator | 2026-01-07 00:53:00.495346 | orchestrator | TASK [proxysql-config : Copying over prometheus 
ProxySQL users config] ********* 2026-01-07 00:53:00.495350 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:01.373) 0:04:58.034 ***** 2026-01-07 00:53:00.495376 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495381 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495385 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495388 | orchestrator | 2026-01-07 00:53:00.495392 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-07 00:53:00.495396 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:00.505) 0:04:58.540 ***** 2026-01-07 00:53:00.495400 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495404 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495408 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495411 | orchestrator | 2026-01-07 00:53:00.495415 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-07 00:53:00.495419 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:01.310) 0:04:59.850 ***** 2026-01-07 00:53:00.495423 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.495427 | orchestrator | 2026-01-07 00:53:00.495431 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-07 00:53:00.495437 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:01.408) 0:05:01.259 ***** 2026-01-07 00:53:00.495441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:00.495445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:00.495456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:53:00.495461 | orchestrator | 2026-01-07 00:53:00.495465 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-07 00:53:00.495468 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:02.814) 0:05:04.073 ***** 2026-01-07 00:53:00.495474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:53:00.495479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:53:00.495483 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495487 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:53:00.495498 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495502 | orchestrator | 2026-01-07 00:53:00.495506 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-07 00:53:00.495512 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.418) 0:05:04.492 ***** 2026-01-07 00:53:00.495516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:53:00.495520 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:53:00.495528 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:53:00.495536 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495540 | orchestrator | 2026-01-07 00:53:00.495543 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-07 00:53:00.495547 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:00.640) 0:05:05.133 ***** 2026-01-07 00:53:00.495551 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495555 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495559 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495563 | orchestrator | 2026-01-07 00:53:00.495566 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-07 00:53:00.495573 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:01.292) 
0:05:06.425 ***** 2026-01-07 00:53:00.495579 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495585 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495595 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495602 | orchestrator | 2026-01-07 00:53:00.495608 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-07 00:53:00.495613 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:01.078) 0:05:07.503 ***** 2026-01-07 00:53:00.495619 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:00.495625 | orchestrator | 2026-01-07 00:53:00.495632 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-07 00:53:00.495638 | orchestrator | Wednesday 07 January 2026 00:51:56 +0000 (0:00:01.817) 0:05:09.321 ***** 2026-01-07 00:53:00.495648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-07 
00:53:00.495660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-07 00:53:00.495672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-07 00:53:00.495679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.495689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.495700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 00:53:00.495707 | orchestrator | 2026-01-07 00:53:00.495713 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-07 00:53:00.495720 | orchestrator | Wednesday 07 January 2026 00:52:02 +0000 (0:00:06.120) 0:05:15.441 ***** 2026-01-07 00:53:00.495730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-07 00:53:00.495737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.495743 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-07 00:53:00.495765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.495771 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495780 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-07 00:53:00.495787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 00:53:00.495795 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495806 | orchestrator | 2026-01-07 00:53:00.495812 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-07 00:53:00.495817 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:01.321) 0:05:16.763 ***** 2026-01-07 00:53:00.495823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.495841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.495848 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.495854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.495874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.495880 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.495885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-07 00:53:00.495897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 
00:53:00.495903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-07 00:53:00.495909 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.495922 | orchestrator | 2026-01-07 00:53:00.495928 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-07 00:53:00.495934 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:00.924) 0:05:17.687 ***** 2026-01-07 00:53:00.495940 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.495945 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.495952 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.495958 | orchestrator | 2026-01-07 00:53:00.495964 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-07 00:53:00.495970 | orchestrator | Wednesday 07 January 2026 00:52:06 +0000 (0:00:01.063) 0:05:18.750 ***** 2026-01-07 00:53:00.495976 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:00.495985 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:00.495991 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:00.495995 | orchestrator | 2026-01-07 00:53:00.495999 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-07 00:53:00.496003 | orchestrator | Wednesday 07 January 2026 00:52:07 +0000 (0:00:01.911) 0:05:20.662 ***** 2026-01-07 00:53:00.496007 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:00.496011 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:00.496014 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:00.496018 | orchestrator | 2026-01-07 00:53:00.496022 | orchestrator | TASK [include_role : trove] 
****************************************************
2026-01-07 00:53:00.496026 | orchestrator | Wednesday 07 January 2026 00:52:08 +0000 (0:00:00.352) 0:05:21.014 *****
2026-01-07 00:53:00.496030 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496033 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496037 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496041 | orchestrator |
2026-01-07 00:53:00.496045 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-07 00:53:00.496049 | orchestrator | Wednesday 07 January 2026 00:52:08 +0000 (0:00:00.705) 0:05:21.720 *****
2026-01-07 00:53:00.496052 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496056 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496060 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496064 | orchestrator |
2026-01-07 00:53:00.496068 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-07 00:53:00.496071 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.313) 0:05:22.034 *****
2026-01-07 00:53:00.496075 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496079 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496083 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496087 | orchestrator |
2026-01-07 00:53:00.496090 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-07 00:53:00.496094 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.314) 0:05:22.358 *****
2026-01-07 00:53:00.496098 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496102 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496106 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496109 | orchestrator |
2026-01-07 00:53:00.496113 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-01-07 00:53:00.496117 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.314) 0:05:22.673 *****
2026-01-07 00:53:00.496121 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:53:00.496124 | orchestrator |
2026-01-07 00:53:00.496128 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-01-07 00:53:00.496132 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:01.841) 0:05:24.515 *****
2026-01-07 00:53:00.496139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496188 | orchestrator |
2026-01-07 00:53:00.496192 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-01-07 00:53:00.496196 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:02.526) 0:05:27.041 *****
2026-01-07 00:53:00.496200 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 00:53:00.496204 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:53:00.496208 | orchestrator | }
2026-01-07 00:53:00.496212 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 00:53:00.496216 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:53:00.496220 | orchestrator | }
2026-01-07 00:53:00.496224 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 00:53:00.496228 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:53:00.496231 | orchestrator | }
2026-01-07 00:53:00.496235 | orchestrator |
2026-01-07 00:53:00.496239 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 00:53:00.496243 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.703) 0:05:27.745 *****
2026-01-07 00:53:00.496249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496264 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496283 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-07 00:53:00.496294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:53:00.496298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:53:00.496302 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496306 | orchestrator |
2026-01-07 00:53:00.496310 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-07 00:53:00.496318 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:01.553) 0:05:29.299 *****
2026-01-07 00:53:00.496322 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496326 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496330 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496334 | orchestrator |
2026-01-07 00:53:00.496338 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-07 00:53:00.496341 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.633) 0:05:29.932 *****
2026-01-07 00:53:00.496345 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496349 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496353 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496397 | orchestrator |
2026-01-07 00:53:00.496401 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-07 00:53:00.496405 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.442) 0:05:30.375 *****
2026-01-07 00:53:00.496409 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496413 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496416 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496420 | orchestrator |
2026-01-07 00:53:00.496424 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-07 00:53:00.496428 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.774) 0:05:31.149 *****
2026-01-07 00:53:00.496432 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496435 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496439 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496443 | orchestrator |
2026-01-07 00:53:00.496447 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-07 00:53:00.496450 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.750) 0:05:31.900 *****
2026-01-07 00:53:00.496457 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496461 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496464 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496468 | orchestrator |
2026-01-07 00:53:00.496472 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-07 00:53:00.496476 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:01.220) 0:05:33.121 *****
2026-01-07 00:53:00.496480 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.496483 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.496487 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.496491 | orchestrator |
2026-01-07 00:53:00.496495 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-07 00:53:00.496499 | orchestrator | Wednesday 07 January 2026 00:52:29 +0000 (0:00:09.606) 0:05:42.728 *****
2026-01-07 00:53:00.496502 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496506 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496510 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496514 | orchestrator |
2026-01-07 00:53:00.496517 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-07 00:53:00.496521 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:00.790) 0:05:43.518 *****
2026-01-07 00:53:00.496525 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.496529 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.496533 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.496537 | orchestrator |
2026-01-07 00:53:00.496540 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-07 00:53:00.496544 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:08.678) 0:05:52.197 *****
2026-01-07 00:53:00.496548 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496552 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496555 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496559 | orchestrator |
2026-01-07 00:53:00.496563 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-07 00:53:00.496567 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:04.117) 0:05:56.315 *****
2026-01-07 00:53:00.496571 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:00.496578 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:00.496582 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:00.496585 | orchestrator |
2026-01-07 00:53:00.496589 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-07 00:53:00.496593 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:08.578) 0:06:04.893 *****
2026-01-07 00:53:00.496597 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496600 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496604 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496608 | orchestrator |
2026-01-07 00:53:00.496612 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-07 00:53:00.496616 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:00.357) 0:06:05.250 *****
2026-01-07 00:53:00.496619 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496626 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496630 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496634 | orchestrator |
2026-01-07 00:53:00.496637 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-07 00:53:00.496641 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:00.334) 0:06:05.585 *****
2026-01-07 00:53:00.496645 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496649 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496652 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496656 | orchestrator |
2026-01-07 00:53:00.496660 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-07 00:53:00.496664 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:00.756) 0:06:06.342 *****
2026-01-07 00:53:00.496668 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496671 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496675 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496679 | orchestrator |
2026-01-07 00:53:00.496683 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-07 00:53:00.496687 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:00.357) 0:06:06.700 *****
2026-01-07 00:53:00.496690 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496694 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496698 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496702 | orchestrator |
2026-01-07 00:53:00.496706 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-07 00:53:00.496709 | orchestrator | Wednesday 07 January 2026 00:52:54 +0000 (0:00:00.426) 0:06:07.126 *****
2026-01-07 00:53:00.496713 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:00.496717 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:00.496721 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:00.496725 | orchestrator |
2026-01-07 00:53:00.496729 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-07 00:53:00.496735 | orchestrator | Wednesday 07 January 2026 00:52:54 +0000 (0:00:00.346) 0:06:07.473 *****
2026-01-07 00:53:00.496740 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496746 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496752 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496758 | orchestrator |
2026-01-07 00:53:00.496764 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-07 00:53:00.496770 | orchestrator | Wednesday 07 January 2026 00:52:56 +0000 (0:00:01.455) 0:06:08.929 *****
2026-01-07 00:53:00.496776 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:00.496781 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:00.496787 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:00.496792 | orchestrator |
2026-01-07 00:53:00.496798 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:53:00.496804 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-07 00:53:00.496811 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-07 00:53:00.496822 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-07 00:53:00.496828 | orchestrator |
2026-01-07 00:53:00.496834 | orchestrator |
2026-01-07 00:53:00.496887 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:53:00.496903 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:00.894) 0:06:09.823 *****
2026-01-07 00:53:00.496909 | orchestrator | ===============================================================================
2026-01-07 00:53:00.496915 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.61s
2026-01-07 00:53:00.496922 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.68s
2026-01-07 00:53:00.496927 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.58s
2026-01-07 00:53:00.496931 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.12s
2026-01-07 00:53:00.496935 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.04s
2026-01-07 00:53:00.496939 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.76s
2026-01-07 00:53:00.496943 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.46s
2026-01-07 00:53:00.496946 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.96s
2026-01-07 00:53:00.496950 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.84s
2026-01-07 00:53:00.496954 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.43s
2026-01-07 00:53:00.496958 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.18s
2026-01-07 00:53:00.496961 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.12s
2026-01-07 00:53:00.496965 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.10s
2026-01-07 00:53:00.496969 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.07s
2026-01-07 00:53:00.496972 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.03s
2026-01-07 00:53:00.496977 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.95s
2026-01-07 00:53:00.496982 | orchestrator | mariadb : Ensure mysql monitor user exist ------------------------------- 3.90s
2026-01-07 00:53:00.496989 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.89s
2026-01-07 00:53:00.496993 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.89s
2026-01-07 00:53:00.496997 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.75s
2026-01-07 00:53:03.519906 | orchestrator | 2026-01-07 00:53:03 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:53:03.521212 | orchestrator | 2026-01-07 00:53:03 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:53:03.522853 | orchestrator | 2026-01-07 00:53:03 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:53:03.522896 | orchestrator | 2026-01-07 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:13.526580 | orchestrator | 2026-01-07 00:54:13 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED
2026-01-07 00:54:13.529860 | orchestrator | 2026-01-07 00:54:13 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:54:13.532001 | orchestrator | 2026-01-07 00:54:13 | 
INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:13.532081 | orchestrator | 2026-01-07 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:16.565885 | orchestrator | 2026-01-07 00:54:16 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:16.566132 | orchestrator | 2026-01-07 00:54:16 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:16.567281 | orchestrator | 2026-01-07 00:54:16 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:16.567334 | orchestrator | 2026-01-07 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:19.611772 | orchestrator | 2026-01-07 00:54:19 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:19.613544 | orchestrator | 2026-01-07 00:54:19 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:19.615666 | orchestrator | 2026-01-07 00:54:19 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:19.615717 | orchestrator | 2026-01-07 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:22.664315 | orchestrator | 2026-01-07 00:54:22 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:22.666132 | orchestrator | 2026-01-07 00:54:22 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:22.668478 | orchestrator | 2026-01-07 00:54:22 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:22.668626 | orchestrator | 2026-01-07 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:25.712698 | orchestrator | 2026-01-07 00:54:25 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:25.715139 | orchestrator | 2026-01-07 00:54:25 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in 
state STARTED 2026-01-07 00:54:25.716867 | orchestrator | 2026-01-07 00:54:25 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:25.716899 | orchestrator | 2026-01-07 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:28.762468 | orchestrator | 2026-01-07 00:54:28 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:28.762793 | orchestrator | 2026-01-07 00:54:28 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:28.764672 | orchestrator | 2026-01-07 00:54:28 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:28.764802 | orchestrator | 2026-01-07 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:31.809411 | orchestrator | 2026-01-07 00:54:31 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:31.810514 | orchestrator | 2026-01-07 00:54:31 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:31.813931 | orchestrator | 2026-01-07 00:54:31 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:31.813994 | orchestrator | 2026-01-07 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:34.867779 | orchestrator | 2026-01-07 00:54:34 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:34.867889 | orchestrator | 2026-01-07 00:54:34 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:34.868736 | orchestrator | 2026-01-07 00:54:34 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:34.868865 | orchestrator | 2026-01-07 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:37.924530 | orchestrator | 2026-01-07 00:54:37 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:37.929851 | orchestrator 
| 2026-01-07 00:54:37 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:37.932371 | orchestrator | 2026-01-07 00:54:37 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:37.933170 | orchestrator | 2026-01-07 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:40.991153 | orchestrator | 2026-01-07 00:54:40 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:40.993500 | orchestrator | 2026-01-07 00:54:40 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:40.995663 | orchestrator | 2026-01-07 00:54:40 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:40.995720 | orchestrator | 2026-01-07 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:44.042300 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:44.044392 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:44.046396 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:44.046440 | orchestrator | 2026-01-07 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:47.097055 | orchestrator | 2026-01-07 00:54:47 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:47.097343 | orchestrator | 2026-01-07 00:54:47 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:47.098751 | orchestrator | 2026-01-07 00:54:47 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:47.098782 | orchestrator | 2026-01-07 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:50.145802 | orchestrator | 2026-01-07 00:54:50 | INFO  | Task 
fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:50.147660 | orchestrator | 2026-01-07 00:54:50 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:50.149272 | orchestrator | 2026-01-07 00:54:50 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:50.149505 | orchestrator | 2026-01-07 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:53.192993 | orchestrator | 2026-01-07 00:54:53 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:53.194415 | orchestrator | 2026-01-07 00:54:53 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:53.195658 | orchestrator | 2026-01-07 00:54:53 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:53.195921 | orchestrator | 2026-01-07 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:56.243620 | orchestrator | 2026-01-07 00:54:56 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state STARTED 2026-01-07 00:54:56.246279 | orchestrator | 2026-01-07 00:54:56 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:54:56.248344 | orchestrator | 2026-01-07 00:54:56 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED 2026-01-07 00:54:56.248427 | orchestrator | 2026-01-07 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:59.307104 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task fcbfd303-c2f3-436a-ab34-b8173daf0619 is in state SUCCESS 2026-01-07 00:54:59.308792 | orchestrator | 2026-01-07 00:54:59.308862 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:54:59.308869 | orchestrator | 2.16.14 2026-01-07 00:54:59.308874 | orchestrator | 2026-01-07 00:54:59.308878 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 
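The status checks above follow a simple poll-until-terminal pattern: query each task's state, log it, and sleep a fixed interval until every task leaves STARTED. A minimal sketch of such a loop — `make_state_source` and `wait_for_tasks` are hypothetical stand-ins of mine, not the actual OSISM client code:

```python
import itertools
import time

def make_state_source(started_checks):
    """Hypothetical stand-in for the task API: reports STARTED a fixed
    number of times, then SUCCESS on every later check."""
    states = itertools.chain(["STARTED"] * started_checks, itertools.repeat("SUCCESS"))
    return lambda: next(states)

def wait_for_tasks(task_states, interval=1.0, log=print):
    """Poll each task until every one reaches a terminal state."""
    pending = dict(task_states)   # task_id -> callable returning the current state
    results = {}
    while pending:
        for task_id, get_state in list(pending.items()):
            state = get_state()
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
                del pending[task_id]
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

With a real client, `get_state` would be an API call per task ID; the fixed sleep between rounds is what produces the "Wait 1 second(s) until the next check" lines in the log above.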
2026-01-07 00:54:59.308883 | orchestrator |
2026-01-07 00:54:59.308887 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-07 00:54:59.308892 | orchestrator | Wednesday 07 January 2026 00:44:22 +0000 (0:00:00.665) 0:00:00.666 *****
2026-01-07 00:54:59.308897 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.308903 | orchestrator |
2026-01-07 00:54:59.308910 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-07 00:54:59.308916 | orchestrator | Wednesday 07 January 2026 00:44:23 +0000 (0:00:01.039) 0:00:01.705 *****
2026-01-07 00:54:59.308922 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.308928 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.308933 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.308939 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.308948 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.308953 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.308959 | orchestrator |
2026-01-07 00:54:59.308965 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-07 00:54:59.308971 | orchestrator | Wednesday 07 January 2026 00:44:24 +0000 (0:00:01.539) 0:00:03.244 *****
2026-01-07 00:54:59.308977 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.308984 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309129 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309142 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309146 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309150 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309154 | orchestrator |
2026-01-07 00:54:59.309157 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-07 00:54:59.309162 | orchestrator | Wednesday 07 January 2026 00:44:25 +0000 (0:00:00.653) 0:00:03.898 *****
2026-01-07 00:54:59.309166 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.309170 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309173 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309178 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309184 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309192 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309201 | orchestrator |
2026-01-07 00:54:59.309207 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-07 00:54:59.309213 | orchestrator | Wednesday 07 January 2026 00:44:26 +0000 (0:00:01.099) 0:00:04.997 *****
2026-01-07 00:54:59.309218 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.309224 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309229 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309235 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309240 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309416 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309426 | orchestrator |
2026-01-07 00:54:59.309430 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-07 00:54:59.309434 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.713) 0:00:05.710 *****
2026-01-07 00:54:59.309437 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.309441 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309445 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309448 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309452 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309456 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309459 | orchestrator |
2026-01-07 00:54:59.309480 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-07 00:54:59.309484 | orchestrator | Wednesday 07 January 2026 00:44:27 +0000 (0:00:00.630) 0:00:06.341 *****
2026-01-07 00:54:59.309488 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.309491 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309495 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309499 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309588 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309597 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309602 | orchestrator |
2026-01-07 00:54:59.309609 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-07 00:54:59.309632 | orchestrator | Wednesday 07 January 2026 00:44:28 +0000 (0:00:00.907) 0:00:07.248 *****
2026-01-07 00:54:59.309639 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.309646 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.309652 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.309658 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.309664 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.309670 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.309676 | orchestrator |
2026-01-07 00:54:59.309682 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-07 00:54:59.309686 | orchestrator | Wednesday 07 January 2026 00:44:29 +0000 (0:00:01.044) 0:00:08.293 *****
2026-01-07 00:54:59.309691 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.309694 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.309698 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.309702 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.309706 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.309709 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.309713 | orchestrator |
2026-01-07 00:54:59.309717 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-07 00:54:59.309923 | orchestrator | Wednesday 07 January 2026 00:44:30 +0000 (0:00:00.944) 0:00:09.237 *****
2026-01-07 00:54:59.310130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:54:59.310141 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:54:59.310147 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:54:59.310251 | orchestrator |
2026-01-07 00:54:59.310260 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-07 00:54:59.310266 | orchestrator | Wednesday 07 January 2026 00:44:31 +0000 (0:00:00.763) 0:00:10.001 *****
2026-01-07 00:54:59.310273 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.310279 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.310285 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.310327 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.310334 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.310340 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.310345 | orchestrator |
2026-01-07 00:54:59.310351 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-07 00:54:59.310358 | orchestrator | Wednesday 07 January 2026 00:44:32 +0000 (0:00:01.124) 0:00:11.125 *****
2026-01-07 00:54:59.310365 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:54:59.310372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:54:59.310381 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:54:59.310388 | orchestrator |
2026-01-07 00:54:59.310395 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-07 00:54:59.310401 | orchestrator | Wednesday 07 January 2026 00:44:35 +0000 (0:00:02.752) 0:00:13.878 *****
2026-01-07 00:54:59.310408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 00:54:59.310414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 00:54:59.310756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 00:54:59.310777 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.310783 | orchestrator |
2026-01-07 00:54:59.310790 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-07 00:54:59.310796 | orchestrator | Wednesday 07 January 2026 00:44:36 +0000 (0:00:00.559) 0:00:14.438 *****
2026-01-07 00:54:59.310809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.310818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.310949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.310964 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.310971 | orchestrator |
2026-01-07 00:54:59.310976 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-07 00:54:59.310983 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:01.165) 0:00:15.603 *****
2026-01-07 00:54:59.312960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.312997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.313002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.313007 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.313011 | orchestrator |
2026-01-07 00:54:59.313015 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-07 00:54:59.313020 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.317) 0:00:15.921 *****
2026-01-07 00:54:59.313115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:44:33.388152', 'end': '2026-01-07 00:44:33.651998', 'delta': '0:00:00.263846', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.313128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:44:34.272522', 'end': '2026-01-07 00:44:34.603308', 'delta': '0:00:00.330786', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.313150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:44:35.062365', 'end': '2026-01-07 00:44:35.355988', 'delta': '0:00:00.293623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.313154 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.313158 | orchestrator |
2026-01-07 00:54:59.313162 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-07 00:54:59.313166 | orchestrator | Wednesday 07 January 2026 00:44:37 +0000 (0:00:00.225) 0:00:16.146 *****
2026-01-07 00:54:59.313169 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.313173 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.313177 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.313181 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.313185 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.313188 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.313192 | orchestrator |
2026-01-07 00:54:59.313196 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-07 00:54:59.313200 | orchestrator | Wednesday 07 January 2026 00:44:39 +0000 (0:00:01.553) 0:00:17.700 *****
2026-01-07 00:54:59.313272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:54:59.313278 | orchestrator |
2026-01-07 00:54:59.313285 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-07 00:54:59.313291 | orchestrator | Wednesday 07 January 2026 00:44:39 +0000 (0:00:00.630) 0:00:18.330 *****
2026-01-07 00:54:59.313297 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.313302 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315206 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315268 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315274 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315279 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315283 | orchestrator |
2026-01-07 00:54:59.315288 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-07 00:54:59.315293 | orchestrator | Wednesday 07 January 2026 00:44:41 +0000 (0:00:01.539) 0:00:19.869 *****
2026-01-07 00:54:59.315297 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315301 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315305 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315308 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315313 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315317 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315321 | orchestrator |
2026-01-07 00:54:59.315325 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-07 00:54:59.315329 | orchestrator | Wednesday 07 January 2026 00:44:42 +0000 (0:00:01.424) 0:00:21.294 *****
2026-01-07 00:54:59.315332 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315353 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315358 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315361 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315365 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315369 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315373 | orchestrator |
2026-01-07 00:54:59.315377 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-07 00:54:59.315381 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:01.515) 0:00:22.809 *****
2026-01-07 00:54:59.315385 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315389 | orchestrator |
2026-01-07 00:54:59.315392 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-07 00:54:59.315396 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:00.181) 0:00:22.991 *****
2026-01-07 00:54:59.315400 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315404 | orchestrator |
2026-01-07 00:54:59.315407 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-07 00:54:59.315411 | orchestrator | Wednesday 07 January 2026 00:44:44 +0000 (0:00:00.314) 0:00:23.305 *****
2026-01-07 00:54:59.315415 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315419 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315423 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315557 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315565 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315569 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315572 | orchestrator |
2026-01-07 00:54:59.315577 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-07 00:54:59.315580 | orchestrator | Wednesday 07 January 2026 00:44:45 +0000 (0:00:01.000) 0:00:24.306 *****
2026-01-07 00:54:59.315584 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315588 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315594 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315600 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315605 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315615 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315622 | orchestrator |
2026-01-07 00:54:59.315627 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-07 00:54:59.315633 | orchestrator | Wednesday 07 January 2026 00:44:46 +0000 (0:00:00.904) 0:00:25.211 *****
2026-01-07 00:54:59.315639 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315645 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315651 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315655 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315659 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315663 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315666 | orchestrator |
2026-01-07 00:54:59.315670 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-07 00:54:59.315674 | orchestrator | Wednesday 07 January 2026 00:44:47 +0000 (0:00:00.664) 0:00:25.875 *****
2026-01-07 00:54:59.315678 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315682 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315693 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315698 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315702 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315705 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315709 | orchestrator |
2026-01-07 00:54:59.315713 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-07 00:54:59.315717 | orchestrator | Wednesday 07 January 2026 00:44:48 +0000 (0:00:00.950) 0:00:26.825 *****
2026-01-07 00:54:59.315720 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315724 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315728 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315731 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315740 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315744 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315748 | orchestrator |
2026-01-07 00:54:59.315752 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-07 00:54:59.315755 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.611) 0:00:27.437 *****
2026-01-07 00:54:59.315759 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315763 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315767 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315770 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315774 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315778 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315781 | orchestrator |
2026-01-07 00:54:59.315785 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-07 00:54:59.315790 | orchestrator | Wednesday 07 January 2026 00:44:49 +0000 (0:00:00.726) 0:00:28.164 *****
2026-01-07 00:54:59.315794 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.315798 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.315801 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.315805 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.315809 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.315812 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.315816 | orchestrator |
2026-01-07 00:54:59.315820 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-07 00:54:59.315824 | orchestrator | Wednesday 07 January 2026 00:44:50 +0000 (0:00:00.698) 0:00:28.862 *****
2026-01-07 00:54:59.315848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8', 'dm-uuid-LVM-V5VfCcYGKl4Bnur1uNiNQmiaWW7ddFt6yluzzHTHlitz361XN7j045GmAnuIzDE8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b', 'dm-uuid-LVM-wc4kI6OqwAscwmmkndvJZpG8N6izNye5C1HoNvgtq5hrMpHm7PsUJkX9BVSVRxeq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-07 00:54:59.315967 | orchestrator | skipping: [testbed-node-3] => (item={'key':
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bobOpU-MYfr-y4Ef-vOoQ-ehNp-x3D7-cMoYMn', 'scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a', 'scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFdUZj-kLPS-OcbZ-VKZl-KciB-jGso-qCnklM', 'scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9', 'scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4', 'scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8', 'dm-uuid-LVM-Zr5ep2rmKcYwUjCbEzdIFdOSaEWKuROcCxfBJuzGV2HesNAu0o0smJSLlEI3yrFN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d', 'dm-uuid-LVM-i0wlbpFvqRihHUwefM4dHK3dwlVDMjbtySInas1puXuPoXmmLkM0U18P6QAryVUz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955', 'dm-uuid-LVM-Rujcq0UkmlYflKsC4fd33Dkl5dBRwjS65A8s9BZ4s4y1kvUR8RL7YEeRphzA7scE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637', 'dm-uuid-LVM-iTQuPrx0FTrMFHWPXcV7DY3IVPYTbJBCy5T5YfzJ7HdIkSfe6dduLErg3NlIYd5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': 
'106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oeVac9-FCb1-x3bL-GDLT-GpDm-oKMP-vtGmCV', 'scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83', 'scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316363 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.316373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7fOV3-0dfJ-XnKV-BIi2-7zfJ-1I70-sGC1en', 'scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d', 'scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8', 'scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 00:54:59.316467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316497 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.316501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-07 00:54:59.316587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316682 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316686 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.316691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eH03hQ-AV7L-sq1w-ZK3M-bMB9-XpM9-NquE15', 'scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb', 'scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sHkOms-DWVH-8KmS-PD8X-N66N-HIm4-33sBky', 'scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6', 'scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e', 'scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316769 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.316811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316823 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.316829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316835 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.316841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-07 00:54:59.316872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:54:59.316950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.317009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:54:59.317017 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.317023 | orchestrator | 2026-01-07 00:54:59.317029 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-07 00:54:59.317035 | orchestrator | Wednesday 07 January 2026 00:44:51 +0000 (0:00:01.475) 0:00:30.338 ***** 2026-01-07 00:54:59.317043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8', 'dm-uuid-LVM-V5VfCcYGKl4Bnur1uNiNQmiaWW7ddFt6yluzzHTHlitz361XN7j045GmAnuIzDE8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b', 'dm-uuid-LVM-wc4kI6OqwAscwmmkndvJZpG8N6izNye5C1HoNvgtq5hrMpHm7PsUJkX9BVSVRxeq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317166 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8', 'dm-uuid-LVM-Zr5ep2rmKcYwUjCbEzdIFdOSaEWKuROcCxfBJuzGV2HesNAu0o0smJSLlEI3yrFN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317184 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d', 'dm-uuid-LVM-i0wlbpFvqRihHUwefM4dHK3dwlVDMjbtySInas1puXuPoXmmLkM0U18P6QAryVUz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-01-07 00:54:59.317194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317238 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:54:59.317304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bobOpU-MYfr-y4Ef-vOoQ-ehNp-x3D7-cMoYMn', 'scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a', 'scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFdUZj-kLPS-OcbZ-VKZl-KciB-jGso-qCnklM', 'scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9', 'scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4', 'scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317399 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955', 'dm-uuid-LVM-Rujcq0UkmlYflKsC4fd33Dkl5dBRwjS65A8s9BZ4s4y1kvUR8RL7YEeRphzA7scE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317406 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317410 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637', 'dm-uuid-LVM-iTQuPrx0FTrMFHWPXcV7DY3IVPYTbJBCy5T5YfzJ7HdIkSfe6dduLErg3NlIYd5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317450 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317454 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.317461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317469 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317503 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317510 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oeVac9-FCb1-x3bL-GDLT-GpDm-oKMP-vtGmCV', 'scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83', 'scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317526 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317533 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317539 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7fOV3-0dfJ-XnKV-BIi2-7zfJ-1I70-sGC1en', 'scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d', 'scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317614 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317639 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8', 'scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ece979bd-9733-4959-9aed-a1dff3c9e3f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317702 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317711 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.317753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317776 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317780 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317784 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317816 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317822 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317832 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317836 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317840 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317844 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.317877 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d1c7d3d-c80b-49ab-9488-e886539f8993-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317893 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317898 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.317935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eH03hQ-AV7L-sq1w-ZK3M-bMB9-XpM9-NquE15', 'scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb', 'scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:54:59.317956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sHkOms-DWVH-8KmS-PD8X-N66N-HIm4-33sBky', 'scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6', 'scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e', 'scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.317991 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318008 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318091 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318105 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318114 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318121 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318177 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318194 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318206 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba965e08-437e-49d6-b05c-3ab1c9739c43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:54:59.318215 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:54:59.318222 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318228 | orchestrator | 2026-01-07 00:54:59.318277 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-07 00:54:59.318285 | orchestrator | Wednesday 07 January 2026 00:44:52 +0000 (0:00:00.892) 0:00:31.230 ***** 2026-01-07 00:54:59.318289 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.318297 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.318301 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.318305 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.318309 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.318313 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.318317 | orchestrator | 2026-01-07 00:54:59.318321 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-07 00:54:59.318325 | orchestrator | Wednesday 07 January 2026 00:44:53 +0000 (0:00:01.086) 0:00:32.317 ***** 2026-01-07 00:54:59.318329 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.318333 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.318336 | 
orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.318340 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.318344 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.318347 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.318351 | orchestrator | 2026-01-07 00:54:59.318355 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:54:59.318359 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:01.129) 0:00:33.447 ***** 2026-01-07 00:54:59.318363 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318367 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318371 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318374 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318378 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318382 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318385 | orchestrator | 2026-01-07 00:54:59.318389 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:54:59.318397 | orchestrator | Wednesday 07 January 2026 00:44:55 +0000 (0:00:00.683) 0:00:34.131 ***** 2026-01-07 00:54:59.318401 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318405 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318408 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318414 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318420 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318428 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318437 | orchestrator | 2026-01-07 00:54:59.318442 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:54:59.318448 | orchestrator | Wednesday 07 January 2026 00:44:56 +0000 (0:00:00.589) 0:00:34.720 ***** 2026-01-07 00:54:59.318454 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:54:59.318460 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318465 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318471 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318476 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318482 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318488 | orchestrator | 2026-01-07 00:54:59.318494 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:54:59.318499 | orchestrator | Wednesday 07 January 2026 00:44:57 +0000 (0:00:00.806) 0:00:35.527 ***** 2026-01-07 00:54:59.318505 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318511 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318517 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318523 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318529 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318535 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318542 | orchestrator | 2026-01-07 00:54:59.318548 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-07 00:54:59.318554 | orchestrator | Wednesday 07 January 2026 00:44:57 +0000 (0:00:00.565) 0:00:36.092 ***** 2026-01-07 00:54:59.318559 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-07 00:54:59.318567 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-07 00:54:59.318574 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-07 00:54:59.318579 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-07 00:54:59.318591 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-07 00:54:59.318597 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-07 00:54:59.318603 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 
2026-01-07 00:54:59.318609 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-07 00:54:59.318615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:54:59.318621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 00:54:59.318626 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-07 00:54:59.318630 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-07 00:54:59.318634 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 00:54:59.318638 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-07 00:54:59.318642 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-07 00:54:59.318646 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-07 00:54:59.318650 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-07 00:54:59.318653 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-07 00:54:59.318657 | orchestrator | 2026-01-07 00:54:59.318661 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-07 00:54:59.318665 | orchestrator | Wednesday 07 January 2026 00:44:59 +0000 (0:00:02.152) 0:00:38.244 ***** 2026-01-07 00:54:59.318669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:54:59.318673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:54:59.318677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:54:59.318681 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-07 00:54:59.318689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 00:54:59.318692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 00:54:59.318696 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:54:59.318701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 00:54:59.318730 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 00:54:59.318735 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 00:54:59.318739 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:54:59.318747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:54:59.318751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:54:59.318755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-07 00:54:59.318758 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-07 00:54:59.318762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-07 00:54:59.318766 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318770 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-07 00:54:59.318778 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-07 00:54:59.318781 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-07 00:54:59.318785 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318789 | orchestrator | 2026-01-07 00:54:59.318793 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-07 00:54:59.318797 | orchestrator | Wednesday 07 January 2026 00:45:00 +0000 (0:00:00.576) 0:00:38.821 ***** 2026-01-07 00:54:59.318801 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.318805 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.318809 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.318814 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.318826 | orchestrator | 2026-01-07 00:54:59.318831 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-07 00:54:59.318836 | orchestrator | Wednesday 07 January 2026 00:45:01 +0000 (0:00:00.951) 0:00:39.773 ***** 2026-01-07 00:54:59.318840 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318844 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318848 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318852 | orchestrator | 2026-01-07 00:54:59.318856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-07 00:54:59.318860 | orchestrator | Wednesday 07 January 2026 00:45:01 +0000 (0:00:00.301) 0:00:40.074 ***** 2026-01-07 00:54:59.318864 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318868 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318872 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318875 | orchestrator | 2026-01-07 00:54:59.318879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-07 00:54:59.318883 | orchestrator | Wednesday 07 January 2026 00:45:02 +0000 (0:00:00.306) 0:00:40.380 ***** 2026-01-07 00:54:59.318887 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318891 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.318895 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.318899 | orchestrator | 2026-01-07 00:54:59.318903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-07 00:54:59.318906 | orchestrator | Wednesday 07 January 2026 00:45:02 +0000 (0:00:00.464) 0:00:40.845 ***** 2026-01-07 00:54:59.318911 | orchestrator | 
ok: [testbed-node-3] 2026-01-07 00:54:59.318915 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.318920 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.318924 | orchestrator | 2026-01-07 00:54:59.318929 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-07 00:54:59.318934 | orchestrator | Wednesday 07 January 2026 00:45:02 +0000 (0:00:00.338) 0:00:41.184 ***** 2026-01-07 00:54:59.318938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:54:59.318943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:54:59.318947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:54:59.318951 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318956 | orchestrator | 2026-01-07 00:54:59.318960 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-07 00:54:59.318965 | orchestrator | Wednesday 07 January 2026 00:45:03 +0000 (0:00:00.369) 0:00:41.554 ***** 2026-01-07 00:54:59.318969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:54:59.318973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:54:59.318978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:54:59.318983 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.318987 | orchestrator | 2026-01-07 00:54:59.318991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-07 00:54:59.318996 | orchestrator | Wednesday 07 January 2026 00:45:03 +0000 (0:00:00.441) 0:00:41.995 ***** 2026-01-07 00:54:59.319000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:54:59.319005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:54:59.319010 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-01-07 00:54:59.319014 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.319018 | orchestrator | 2026-01-07 00:54:59.319022 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-07 00:54:59.319027 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.420) 0:00:42.416 ***** 2026-01-07 00:54:59.319030 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.319034 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.319042 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.319045 | orchestrator | 2026-01-07 00:54:59.319097 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-07 00:54:59.319103 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.652) 0:00:43.069 ***** 2026-01-07 00:54:59.319107 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-07 00:54:59.319111 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-07 00:54:59.319134 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-07 00:54:59.319140 | orchestrator | 2026-01-07 00:54:59.319144 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-07 00:54:59.319148 | orchestrator | Wednesday 07 January 2026 00:45:06 +0000 (0:00:01.476) 0:00:44.545 ***** 2026-01-07 00:54:59.319211 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:54:59.319236 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:54:59.319242 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:54:59.319249 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 00:54:59.319255 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-07 00:54:59.319262 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 00:54:59.319268 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 00:54:59.319274 | orchestrator | 
2026-01-07 00:54:59.319280 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-07 00:54:59.319287 | orchestrator | Wednesday 07 January 2026 00:45:06 +0000 (0:00:00.736) 0:00:45.282 *****
2026-01-07 00:54:59.319294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:54:59.319305 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:54:59.319311 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:54:59.319318 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.319324 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-07 00:54:59.319330 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 00:54:59.319336 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 00:54:59.319342 | orchestrator | 
2026-01-07 00:54:59.319348 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:54:59.319354 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:01.828) 0:00:47.110 *****
2026-01-07 00:54:59.319362 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.319405 | orchestrator | 
2026-01-07 00:54:59.319411 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:54:59.319415 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:01.311) 0:00:48.423 *****
2026-01-07 00:54:59.319420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.319424 | orchestrator | 
2026-01-07 00:54:59.319428 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:54:59.319432 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:01.042) 0:00:49.784 *****
2026-01-07 00:54:59.319436 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319440 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319444 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319448 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.319459 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.319463 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.319468 | orchestrator | 
2026-01-07 00:54:59.319472 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:54:59.319475 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:01.042) 0:00:50.827 *****
2026-01-07 00:54:59.319479 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319483 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319487 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319491 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319495 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319498 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319502 | orchestrator | 
2026-01-07 00:54:59.319507 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:54:59.319511 | orchestrator | Wednesday 07 January 2026 00:45:13 +0000 (0:00:00.959) 0:00:51.787 *****
2026-01-07 00:54:59.319515 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319519 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319524 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319531 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319537 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319544 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319550 | orchestrator | 
2026-01-07 00:54:59.319556 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:54:59.319562 | orchestrator | Wednesday 07 January 2026 00:45:14 +0000 (0:00:00.657) 0:00:52.444 *****
2026-01-07 00:54:59.319568 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319575 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319582 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319589 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319595 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319601 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319607 | orchestrator | 
2026-01-07 00:54:59.319615 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:54:59.319619 | orchestrator | Wednesday 07 January 2026 00:45:14 +0000 (0:00:00.731) 0:00:53.176 *****
2026-01-07 00:54:59.319623 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319627 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319630 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319634 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.319638 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.319677 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.319682 | orchestrator | 
2026-01-07 00:54:59.319686 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:54:59.319690 | orchestrator | Wednesday 07 January 2026 00:45:15 +0000 (0:00:00.983) 0:00:54.160 *****
2026-01-07 00:54:59.319694 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319698 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319702 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319706 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319710 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319714 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319717 | orchestrator | 
2026-01-07 00:54:59.319721 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:54:59.319725 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.595) 0:00:54.755 *****
2026-01-07 00:54:59.319729 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319732 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319736 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319741 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319745 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319748 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319752 | orchestrator | 
2026-01-07 00:54:59.319756 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:54:59.319765 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.629) 0:00:55.385 *****
2026-01-07 00:54:59.319769 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319773 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319777 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319780 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.319784 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.319792 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.319796 | orchestrator | 
2026-01-07 00:54:59.319800 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:54:59.319804 | orchestrator | Wednesday 07 January 2026 00:45:18 +0000 (0:00:01.071) 0:00:56.456 *****
2026-01-07 00:54:59.319808 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319812 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319815 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319819 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.319823 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.319827 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.319831 | orchestrator | 
2026-01-07 00:54:59.319835 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:54:59.319839 | orchestrator | Wednesday 07 January 2026 00:45:19 +0000 (0:00:01.670) 0:00:58.126 *****
2026-01-07 00:54:59.319843 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319847 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319850 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319854 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319858 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319862 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319866 | orchestrator | 
2026-01-07 00:54:59.319870 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:54:59.319874 | orchestrator | Wednesday 07 January 2026 00:45:20 +0000 (0:00:00.594) 0:00:58.720 *****
2026-01-07 00:54:59.319877 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.319881 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.319885 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.319889 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.319893 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.319897 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.319901 | orchestrator | 
2026-01-07 00:54:59.319905 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:54:59.319909 | orchestrator | Wednesday 07 January 2026 00:45:21 +0000 (0:00:00.928) 0:00:59.649 *****
2026-01-07 00:54:59.319913 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319917 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319921 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319925 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319929 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319933 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319936 | orchestrator | 
2026-01-07 00:54:59.319941 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:54:59.319944 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:01.040) 0:01:00.690 *****
2026-01-07 00:54:59.319948 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319952 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319956 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319960 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.319964 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.319968 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.319972 | orchestrator | 
2026-01-07 00:54:59.319976 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:54:59.319979 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:01.379) 0:01:02.069 *****
2026-01-07 00:54:59.319983 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.319987 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.319991 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.319994 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320003 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320007 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320011 | orchestrator | 
2026-01-07 00:54:59.320014 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-07 00:54:59.320019 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.622) 0:01:02.691 *****
2026-01-07 00:54:59.320022 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320026 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320030 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320033 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320037 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320041 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320046 | orchestrator | 
2026-01-07 00:54:59.320123 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-07 00:54:59.320132 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.669) 0:01:03.361 *****
2026-01-07 00:54:59.320139 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320145 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320151 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320157 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320192 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320201 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320207 | orchestrator | 
2026-01-07 00:54:59.320214 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-07 00:54:59.320222 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.849) 0:01:04.210 *****
2026-01-07 00:54:59.320228 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320235 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320241 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320247 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.320254 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.320260 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.320264 | orchestrator | 
2026-01-07 00:54:59.320268 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-07 00:54:59.320272 | orchestrator | Wednesday 07 January 2026 00:45:26 +0000 (0:00:00.789) 0:01:04.999 *****
2026-01-07 00:54:59.320276 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.320279 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.320283 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.320287 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.320291 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.320295 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.320298 | orchestrator | 
2026-01-07 00:54:59.320302 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-07 00:54:59.320306 | orchestrator | Wednesday 07 January 2026 00:45:27 +0000 (0:00:01.258) 0:01:06.258 *****
2026-01-07 00:54:59.320310 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.320314 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.320318 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.320322 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.320332 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.320336 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.320340 | orchestrator | 
2026-01-07 00:54:59.320344 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-07 00:54:59.320348 | orchestrator | Wednesday 07 January 2026 00:45:29 +0000 (0:00:01.380) 0:01:07.638 *****
2026-01-07 00:54:59.320354 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.320360 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.320366 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.320372 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.320378 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.320384 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.320390 | orchestrator | 
2026-01-07 00:54:59.320396 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-07 00:54:59.320410 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:01.516) 0:01:09.155 *****
2026-01-07 00:54:59.320418 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.320426 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.320432 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.320438 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.320445 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.320451 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.320457 | orchestrator | 
2026-01-07 00:54:59.320464 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-07 00:54:59.320470 | orchestrator | Wednesday 07 January 2026 00:45:32 +0000 (0:00:02.161) 0:01:11.316 *****
2026-01-07 00:54:59.320477 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.320481 | orchestrator | 
2026-01-07 00:54:59.320485 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-07 00:54:59.320489 | orchestrator | Wednesday 07 January 2026 00:45:33 +0000 (0:00:00.953) 0:01:12.270 *****
2026-01-07 00:54:59.320493 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320526 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320533 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320540 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320547 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320553 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320558 | orchestrator | 
2026-01-07 00:54:59.320565 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-07 00:54:59.320571 | orchestrator | Wednesday 07 January 2026 00:45:34 +0000 (0:00:00.638) 0:01:12.908 *****
2026-01-07 00:54:59.320578 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320582 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320586 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320590 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320594 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320598 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320602 | orchestrator | 
2026-01-07 00:54:59.320605 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-07 00:54:59.320610 | orchestrator | Wednesday 07 January 2026 00:45:35 +0000 (0:00:00.589) 0:01:13.497 *****
2026-01-07 00:54:59.320613 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320618 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320622 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320625 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320629 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320633 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-07 00:54:59.320637 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320641 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320645 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320649 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320682 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320687 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-07 00:54:59.320690 | orchestrator | 
2026-01-07 00:54:59.320694 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-07 00:54:59.320706 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:01.219) 0:01:14.716 *****
2026-01-07 00:54:59.320710 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.320714 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.320718 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.320722 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.320726 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.320730 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.320733 | orchestrator | 
2026-01-07 00:54:59.320738 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-07 00:54:59.320741 | orchestrator | Wednesday 07 January 2026 00:45:37 +0000 (0:00:01.030) 0:01:15.747 *****
2026-01-07 00:54:59.320745 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320749 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320753 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320757 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320760 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320764 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320768 | orchestrator | 
2026-01-07 00:54:59.320772 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-07 00:54:59.320776 | orchestrator | Wednesday 07 January 2026 00:45:37 +0000 (0:00:00.549) 0:01:16.297 *****
2026-01-07 00:54:59.320785 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320789 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320792 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320796 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320800 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320803 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320807 | orchestrator | 
2026-01-07 00:54:59.320811 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-07 00:54:59.320815 | orchestrator | Wednesday 07 January 2026 00:45:38 +0000 (0:00:00.601) 0:01:16.899 *****
2026-01-07 00:54:59.320819 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320823 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320826 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320830 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320835 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320838 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.320842 | orchestrator | 
2026-01-07 00:54:59.320846 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-07 00:54:59.320850 | orchestrator | Wednesday 07 January 2026 00:45:38 +0000 (0:00:00.447) 0:01:17.347 *****
2026-01-07 00:54:59.320854 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.320857 | orchestrator | 
2026-01-07 00:54:59.320861 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-07 00:54:59.320865 | orchestrator | Wednesday 07 January 2026 00:45:39 +0000 (0:00:00.946) 0:01:18.293 *****
2026-01-07 00:54:59.320869 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.320873 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.320876 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.320880 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.320884 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.320888 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.320892 | orchestrator | 
2026-01-07 00:54:59.320896 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-07 00:54:59.320900 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:58.183) 0:02:16.477 *****
2026-01-07 00:54:59.320903 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.320907 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.320911 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.320918 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.320922 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.320926 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.320930 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.320934 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.320937 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.320941 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.320945 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.320949 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.320953 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.320957 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.320960 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.320964 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.320968 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.320972 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.320975 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.320979 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.320997 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-01-07 00:54:59.321002 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-01-07 00:54:59.321006 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4) 
2026-01-07 00:54:59.321010 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321014 | orchestrator | 
2026-01-07 00:54:59.321018 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-07 00:54:59.321022 | orchestrator | Wednesday 07 January 2026 00:46:38 +0000 (0:00:00.672) 0:02:17.149 *****
2026-01-07 00:54:59.321025 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321029 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321033 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321037 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321040 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321044 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321064 | orchestrator | 
2026-01-07 00:54:59.321071 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-07 00:54:59.321077 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.755) 0:02:17.904 *****
2026-01-07 00:54:59.321084 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321088 | orchestrator | 
2026-01-07 00:54:59.321092 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-07 00:54:59.321096 | orchestrator | Wednesday 07 January 2026 00:46:39 +0000 (0:00:00.164) 0:02:18.069 *****
2026-01-07 00:54:59.321100 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321103 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321107 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321115 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321118 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321122 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321126 | orchestrator | 
2026-01-07 00:54:59.321130 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-07 00:54:59.321134 | orchestrator | Wednesday 07 January 2026 00:46:40 +0000 (0:00:00.653) 0:02:18.723 *****
2026-01-07 00:54:59.321138 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321142 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321149 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321153 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321157 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321161 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321164 | orchestrator | 
2026-01-07 00:54:59.321168 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-07 00:54:59.321172 | orchestrator | Wednesday 07 January 2026 00:46:41 +0000 (0:00:00.832) 0:02:19.555 *****
2026-01-07 00:54:59.321176 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321179 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321183 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321187 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321191 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321195 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321198 | orchestrator | 
2026-01-07 00:54:59.321202 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-07 00:54:59.321206 | orchestrator | Wednesday 07 January 2026 00:46:41 +0000 (0:00:00.724) 0:02:20.280 *****
2026-01-07 00:54:59.321210 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.321214 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.321217 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.321221 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.321225 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.321228 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.321232 | orchestrator | 
2026-01-07 00:54:59.321236 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-07 00:54:59.321240 | orchestrator | Wednesday 07 January 2026 00:46:44 +0000 (0:00:02.703) 0:02:22.984 *****
2026-01-07 00:54:59.321243 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.321247 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.321251 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.321255 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.321258 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.321262 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.321266 | orchestrator | 
2026-01-07 00:54:59.321269 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-07 00:54:59.321273 | orchestrator | Wednesday 07 January 2026 00:46:45 +0000 (0:00:00.566) 0:02:23.551 *****
2026-01-07 00:54:59.321278 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.321283 | orchestrator | 
2026-01-07 00:54:59.321287 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-07 00:54:59.321291 | orchestrator | Wednesday 07 January 2026 00:46:46 +0000 (0:00:01.065) 0:02:24.617 *****
2026-01-07 00:54:59.321295 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321299 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321302 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321306 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321310 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321314 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321318 | orchestrator | 
2026-01-07 00:54:59.321321 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-07 00:54:59.321325 | orchestrator | Wednesday 07 January 2026 00:46:46 +0000 (0:00:00.698) 0:02:25.315 *****
2026-01-07 00:54:59.321329 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321333 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321336 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321340 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321344 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321348 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321352 | orchestrator | 
2026-01-07 00:54:59.321355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-07 00:54:59.321359 | orchestrator | Wednesday 07 January 2026 00:46:47 +0000 (0:00:00.505) 0:02:25.821 *****
2026-01-07 00:54:59.321388 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321394 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321423 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321430 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321437 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321443 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321450 | orchestrator | 
2026-01-07 00:54:59.321456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-07 00:54:59.321461 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.700) 0:02:26.522 *****
2026-01-07 00:54:59.321468 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321475 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321479 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321482 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321486 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321490 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321493 | orchestrator | 
2026-01-07 00:54:59.321497 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-07 00:54:59.321501 | orchestrator | Wednesday 07 January 2026 00:46:48 +0000 (0:00:00.707) 0:02:27.229 *****
2026-01-07 00:54:59.321505 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321509 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321512 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321516 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321520 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321526 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321532 | orchestrator | 
2026-01-07 00:54:59.321538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-07 00:54:59.321544 | orchestrator | Wednesday 07 January 2026 00:46:49 +0000 (0:00:00.715) 0:02:27.944 *****
2026-01-07 00:54:59.321550 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321565 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321571 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321577 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321582 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321588 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321594 | orchestrator | 
2026-01-07 00:54:59.321600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-07 00:54:59.321606 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:00.460) 0:02:28.405 *****
2026-01-07 00:54:59.321610 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321614 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321618 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321622 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321626 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321629 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321633 | orchestrator | 
2026-01-07 00:54:59.321637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-07 00:54:59.321641 | orchestrator | Wednesday 07 January 2026 00:46:50 +0000 (0:00:00.724) 0:02:29.129 *****
2026-01-07 00:54:59.321645 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.321648 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.321652 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.321656 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.321659 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.321663 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.321667 | orchestrator | 
2026-01-07 00:54:59.321671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-07 00:54:59.321675 | orchestrator | Wednesday 07 January 2026 00:46:51 +0000 (0:00:00.580) 0:02:29.710 *****
2026-01-07 00:54:59.321679 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.321682 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.321691 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.321695 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.321699 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.321703 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.321707 | orchestrator | 
2026-01-07 00:54:59.321711 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-07 00:54:59.321715 | orchestrator | Wednesday 07 January 2026 00:46:52 +0000 (0:00:01.211) 0:02:30.921 *****
2026-01-07 00:54:59.321719 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.321723 | orchestrator | 
2026-01-07 00:54:59.321727 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-07 00:54:59.321730 | orchestrator | Wednesday 07 January 2026 00:46:53 +0000 (0:00:01.193) 0:02:32.115 *****
2026-01-07 00:54:59.321735 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-07 00:54:59.321738 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-07 00:54:59.321742 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-07 00:54:59.321746 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321750 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-07 00:54:59.321754 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321758 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-07 00:54:59.321761 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321765 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-07 00:54:59.321769 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321773 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321781 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321784 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321788 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-07 00:54:59.321792 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-07 00:54:59.321796 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321799 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-07 00:54:59.321845 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-07 00:54:59.321854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321861 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-07 00:54:59.321867 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-07 00:54:59.321873 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-07
00:54:59.321879 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-07 00:54:59.321886 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-07 00:54:59.321893 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-07 00:54:59.321897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321900 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-07 00:54:59.321904 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-07 00:54:59.321908 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321912 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321916 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-07 00:54:59.321920 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321934 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-07 00:54:59.321937 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321945 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321952 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.321956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321960 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-07 00:54:59.321964 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.321967 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.321971 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.321979 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-07 00:54:59.321983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.321987 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.321990 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.321994 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.321998 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.322006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:54:59.322009 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322062 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322068 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322076 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.322080 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:54:59.322084 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322087 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322091 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 
00:54:59.322095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322099 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:54:59.322106 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:54:59.322110 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:54:59.322118 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:54:59.322122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:54:59.322130 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322133 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322145 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:54:59.322172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:54:59.322177 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322185 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322188 | 
orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-07 00:54:59.322192 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322196 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:54:59.322200 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-07 00:54:59.322204 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-07 00:54:59.322207 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-07 00:54:59.322211 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-07 00:54:59.322215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322219 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:54:59.322223 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-07 00:54:59.322226 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-07 00:54:59.322230 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-07 00:54:59.322238 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-07 00:54:59.322242 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-07 00:54:59.322246 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-07 00:54:59.322250 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-07 00:54:59.322254 | orchestrator | 2026-01-07 00:54:59.322257 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-07 00:54:59.322261 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:06.752) 0:02:38.868 ***** 2026-01-07 00:54:59.322265 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322269 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322273 
| orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322277 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.322282 | orchestrator | 2026-01-07 00:54:59.322288 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-07 00:54:59.322294 | orchestrator | Wednesday 07 January 2026 00:47:01 +0000 (0:00:01.041) 0:02:39.909 ***** 2026-01-07 00:54:59.322299 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322306 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322312 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322318 | orchestrator | 2026-01-07 00:54:59.322324 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-07 00:54:59.322329 | orchestrator | Wednesday 07 January 2026 00:47:02 +0000 (0:00:01.013) 0:02:40.922 ***** 2026-01-07 00:54:59.322336 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322342 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322355 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322361 | orchestrator | 2026-01-07 00:54:59.322367 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 
2026-01-07 00:54:59.322373 | orchestrator | Wednesday 07 January 2026 00:47:03 +0000 (0:00:01.264) 0:02:42.187 ***** 2026-01-07 00:54:59.322379 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.322385 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.322391 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.322397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322403 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322409 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322415 | orchestrator | 2026-01-07 00:54:59.322421 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-07 00:54:59.322428 | orchestrator | Wednesday 07 January 2026 00:47:04 +0000 (0:00:00.525) 0:02:42.712 ***** 2026-01-07 00:54:59.322435 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.322440 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.322447 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.322452 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322455 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322459 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322463 | orchestrator | 2026-01-07 00:54:59.322467 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-07 00:54:59.322471 | orchestrator | Wednesday 07 January 2026 00:47:05 +0000 (0:00:00.692) 0:02:43.404 ***** 2026-01-07 00:54:59.322474 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322478 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322482 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322486 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322490 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322493 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322497 | orchestrator | 2026-01-07 
00:54:59.322519 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-07 00:54:59.322525 | orchestrator | Wednesday 07 January 2026 00:47:05 +0000 (0:00:00.509) 0:02:43.914 ***** 2026-01-07 00:54:59.322532 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322538 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322544 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322550 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322556 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322563 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322570 | orchestrator | 2026-01-07 00:54:59.322576 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-07 00:54:59.322582 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.653) 0:02:44.568 ***** 2026-01-07 00:54:59.322589 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322595 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322599 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322603 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322607 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322611 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322614 | orchestrator | 2026-01-07 00:54:59.322618 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-07 00:54:59.322622 | orchestrator | Wednesday 07 January 2026 00:47:06 +0000 (0:00:00.524) 0:02:45.092 ***** 2026-01-07 00:54:59.322626 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322630 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322634 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322638 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322641 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322645 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322653 | orchestrator | 2026-01-07 00:54:59.322662 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-07 00:54:59.322666 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.545) 0:02:45.638 ***** 2026-01-07 00:54:59.322670 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322674 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322678 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322681 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322685 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322689 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322693 | orchestrator | 2026-01-07 00:54:59.322697 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-07 00:54:59.322701 | orchestrator | Wednesday 07 January 2026 00:47:07 +0000 (0:00:00.505) 0:02:46.143 ***** 2026-01-07 00:54:59.322704 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322708 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322712 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322716 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322719 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322723 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322727 | orchestrator | 2026-01-07 00:54:59.322731 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-07 00:54:59.322735 | orchestrator | Wednesday 07 January 2026 00:47:08 +0000 (0:00:00.678) 0:02:46.821 ***** 2026-01-07 00:54:59.322739 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322742 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322746 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322750 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.322754 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.322757 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.322761 | orchestrator | 2026-01-07 00:54:59.322765 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-07 00:54:59.322769 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:03.135) 0:02:49.956 ***** 2026-01-07 00:54:59.322773 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.322777 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.322781 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.322784 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322788 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322792 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322796 | orchestrator | 2026-01-07 00:54:59.322799 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-07 00:54:59.322803 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.752) 0:02:50.709 ***** 2026-01-07 00:54:59.322807 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.322811 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.322814 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322818 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.322822 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322826 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322829 | orchestrator | 2026-01-07 00:54:59.322833 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-07 00:54:59.322837 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:00.498) 0:02:51.208 ***** 2026-01-07 
00:54:59.322841 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322845 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322848 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.322852 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322856 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322860 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322863 | orchestrator | 2026-01-07 00:54:59.322867 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-07 00:54:59.322871 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.631) 0:02:51.840 ***** 2026-01-07 00:54:59.322879 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322883 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322887 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.322891 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322911 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322915 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322919 | orchestrator | 2026-01-07 00:54:59.322923 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-07 00:54:59.322927 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.683) 0:02:52.523 ***** 2026-01-07 00:54:59.322933 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-07 00:54:59.322939 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-07 00:54:59.322944 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.322950 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-07 00:54:59.322955 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-07 00:54:59.322959 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.322963 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-07 00:54:59.322967 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-07 00:54:59.322971 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:54:59.322975 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.322978 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.322982 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.322986 | orchestrator | 2026-01-07 00:54:59.322990 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-07 00:54:59.322994 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.752) 0:02:53.276 ***** 2026-01-07 00:54:59.322997 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.323001 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.323005 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.323009 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.323013 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.323017 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.323125 | orchestrator | 2026-01-07 00:54:59.323148 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-07 00:54:59.323153 | orchestrator | Wednesday 07 January 2026 00:47:15 +0000 (0:00:00.636) 0:02:53.913 ***** 2026-01-07 00:54:59.323157 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.323161 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.323164 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.323168 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.323172 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.323176 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.323179 | orchestrator | 2026-01-07 00:54:59.323183 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-07 00:54:59.323188 | orchestrator | Wednesday 07 January 2026 00:47:16 +0000 (0:00:00.523) 0:02:54.436 ***** 
2026-01-07 00:54:59.323192 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.323195 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.323199 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.323203 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.323207 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.323210 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.323214 | orchestrator | 2026-01-07 00:54:59.323218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-07 00:54:59.323222 | orchestrator | Wednesday 07 January 2026 00:47:16 +0000 (0:00:00.466) 0:02:54.902 ***** 2026-01-07 00:54:59.323226 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.323230 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.323233 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.323237 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.323241 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.323245 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.323249 | orchestrator | 2026-01-07 00:54:59.323253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-07 00:54:59.323276 | orchestrator | Wednesday 07 January 2026 00:47:17 +0000 (0:00:00.750) 0:02:55.653 ***** 2026-01-07 00:54:59.323281 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.323285 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.323288 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.323292 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.323296 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.323300 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.323303 | orchestrator | 2026-01-07 00:54:59.323307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address] ***************
2026-01-07 00:54:59.323311 | orchestrator | Wednesday 07 January 2026 00:47:17 +0000 (0:00:00.627) 0:02:56.281 *****
2026-01-07 00:54:59.323315 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.323321 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.323326 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.323332 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.323337 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.323343 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.323349 | orchestrator |
2026-01-07 00:54:59.323356 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-07 00:54:59.323362 | orchestrator | Wednesday 07 January 2026 00:47:19 +0000 (0:00:01.094) 0:02:57.376 *****
2026-01-07 00:54:59.323368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.323374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.323380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.323386 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323392 | orchestrator |
2026-01-07 00:54:59.323404 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-07 00:54:59.323411 | orchestrator | Wednesday 07 January 2026 00:47:19 +0000 (0:00:00.570) 0:02:57.946 *****
2026-01-07 00:54:59.323422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.323425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.323429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.323433 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323437 | orchestrator |
2026-01-07 00:54:59.323440 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-07 00:54:59.323444 | orchestrator | Wednesday 07 January 2026 00:47:19 +0000 (0:00:00.363) 0:02:58.309 *****
2026-01-07 00:54:59.323448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.323451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.323455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.323459 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323463 | orchestrator |
2026-01-07 00:54:59.323466 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-07 00:54:59.323470 | orchestrator | Wednesday 07 January 2026 00:47:20 +0000 (0:00:00.455) 0:02:58.765 *****
2026-01-07 00:54:59.323474 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.323478 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.323481 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.323485 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.323489 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.323493 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.323497 | orchestrator |
2026-01-07 00:54:59.323501 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-07 00:54:59.323504 | orchestrator | Wednesday 07 January 2026 00:47:21 +0000 (0:00:00.856) 0:02:59.622 *****
2026-01-07 00:54:59.323508 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-07 00:54:59.323514 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 00:54:59.323520 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-07 00:54:59.323526 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-07 00:54:59.323533 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.323539 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-07 00:54:59.323545 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.323551 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-07 00:54:59.323558 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.323563 | orchestrator |
2026-01-07 00:54:59.323569 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-07 00:54:59.323575 | orchestrator | Wednesday 07 January 2026 00:47:23 +0000 (0:00:02.645) 0:03:02.267 *****
2026-01-07 00:54:59.323579 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.323583 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.323587 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.323590 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.323594 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.323598 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.323601 | orchestrator |
2026-01-07 00:54:59.323605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:54:59.323609 | orchestrator | Wednesday 07 January 2026 00:47:26 +0000 (0:00:02.304) 0:03:04.571 *****
2026-01-07 00:54:59.323613 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.323616 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.323620 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.323624 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.323628 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.323631 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.323635 | orchestrator |
2026-01-07 00:54:59.323639 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-07 00:54:59.323643 | orchestrator | Wednesday 07 January 2026 00:47:27 +0000 (0:00:01.057) 0:03:05.629 *****
2026-01-07 00:54:59.323652 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323656 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.323659 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.323664 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.323668 | orchestrator |
2026-01-07 00:54:59.323672 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-07 00:54:59.323697 | orchestrator | Wednesday 07 January 2026 00:47:28 +0000 (0:00:00.889) 0:03:06.518 *****
2026-01-07 00:54:59.323702 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.323706 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.323709 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.323713 | orchestrator |
2026-01-07 00:54:59.323717 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-07 00:54:59.323721 | orchestrator | Wednesday 07 January 2026 00:47:28 +0000 (0:00:00.275) 0:03:06.794 *****
2026-01-07 00:54:59.323725 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.323729 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.323733 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.323736 | orchestrator |
2026-01-07 00:54:59.323740 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-07 00:54:59.323744 | orchestrator | Wednesday 07 January 2026 00:47:29 +0000 (0:00:01.181) 0:03:07.976 *****
2026-01-07 00:54:59.323748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:54:59.323752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:54:59.323756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:54:59.323760 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.323763 | orchestrator |
2026-01-07 00:54:59.323767 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-07 00:54:59.323771 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.854) 0:03:08.830 *****
2026-01-07 00:54:59.323775 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.323779 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.323782 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.323786 | orchestrator |
2026-01-07 00:54:59.323790 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-07 00:54:59.323799 | orchestrator | Wednesday 07 January 2026 00:47:30 +0000 (0:00:00.249) 0:03:09.080 *****
2026-01-07 00:54:59.323803 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.323807 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.323811 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.323815 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.323819 | orchestrator |
2026-01-07 00:54:59.323823 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-07 00:54:59.323826 | orchestrator | Wednesday 07 January 2026 00:47:31 +0000 (0:00:00.984) 0:03:10.064 *****
2026-01-07 00:54:59.323830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.323834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.323838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.323842 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323845 | orchestrator |
2026-01-07 00:54:59.323849 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-07 00:54:59.323853 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:00.368) 0:03:10.432 *****
2026-01-07 00:54:59.323857 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323860 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.323864 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.323868 | orchestrator |
2026-01-07 00:54:59.323872 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-07 00:54:59.323875 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:00.287) 0:03:10.720 *****
2026-01-07 00:54:59.323884 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323888 | orchestrator |
2026-01-07 00:54:59.323892 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-07 00:54:59.323895 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:00.186) 0:03:10.906 *****
2026-01-07 00:54:59.323899 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323903 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.323907 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.323911 | orchestrator |
2026-01-07 00:54:59.323914 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-07 00:54:59.323918 | orchestrator | Wednesday 07 January 2026 00:47:32 +0000 (0:00:00.318) 0:03:11.225 *****
2026-01-07 00:54:59.323922 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323926 | orchestrator |
2026-01-07 00:54:59.323930 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-07 00:54:59.323933 | orchestrator | Wednesday 07 January 2026 00:47:33 +0000 (0:00:00.259) 0:03:11.484 *****
2026-01-07 00:54:59.323937 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323941 | orchestrator |
2026-01-07 00:54:59.323945 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-07 00:54:59.323948 | orchestrator | Wednesday 07 January 2026 00:47:33 +0000 (0:00:00.207) 0:03:11.692 *****
2026-01-07 00:54:59.323952 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323956 | orchestrator |
2026-01-07 00:54:59.323960 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-07 00:54:59.323963 | orchestrator | Wednesday 07 January 2026 00:47:33 +0000 (0:00:00.113) 0:03:11.806 *****
2026-01-07 00:54:59.323967 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323971 | orchestrator |
2026-01-07 00:54:59.323975 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-07 00:54:59.323978 | orchestrator | Wednesday 07 January 2026 00:47:34 +0000 (0:00:00.657) 0:03:12.463 *****
2026-01-07 00:54:59.323982 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.323986 | orchestrator |
2026-01-07 00:54:59.323990 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-07 00:54:59.323993 | orchestrator | Wednesday 07 January 2026 00:47:34 +0000 (0:00:00.224) 0:03:12.688 *****
2026-01-07 00:54:59.323997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.324001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.324005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.324008 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324012 | orchestrator |
2026-01-07 00:54:59.324016 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-07 00:54:59.324034 | orchestrator | Wednesday 07 January 2026 00:47:34 +0000 (0:00:00.385) 0:03:13.073 *****
2026-01-07 00:54:59.324038 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324042 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.324046 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.324070 | orchestrator |
2026-01-07 00:54:59.324074 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-07 00:54:59.324078 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:00.287) 0:03:13.360 *****
2026-01-07 00:54:59.324082 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324086 | orchestrator |
2026-01-07 00:54:59.324089 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-07 00:54:59.324093 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:00.196) 0:03:13.557 *****
2026-01-07 00:54:59.324097 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324101 | orchestrator |
2026-01-07 00:54:59.324104 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-07 00:54:59.324108 | orchestrator | Wednesday 07 January 2026 00:47:35 +0000 (0:00:00.222) 0:03:13.780 *****
2026-01-07 00:54:59.324116 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324120 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324124 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.324131 | orchestrator |
2026-01-07 00:54:59.324135 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-07 00:54:59.324139 | orchestrator | Wednesday 07 January 2026 00:47:36 +0000 (0:00:00.863) 0:03:14.644 *****
2026-01-07 00:54:59.324146 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.324150 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.324154 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.324158 | orchestrator |
2026-01-07 00:54:59.324161 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-07 00:54:59.324165 | orchestrator | Wednesday 07 January 2026 00:47:36 +0000 (0:00:00.306) 0:03:14.950 *****
2026-01-07 00:54:59.324169 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.324173 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.324176 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.324180 | orchestrator |
2026-01-07 00:54:59.324184 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-07 00:54:59.324188 | orchestrator | Wednesday 07 January 2026 00:47:37 +0000 (0:00:01.364) 0:03:16.315 *****
2026-01-07 00:54:59.324192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.324195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.324199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.324203 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324207 | orchestrator |
2026-01-07 00:54:59.324211 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-07 00:54:59.324214 | orchestrator | Wednesday 07 January 2026 00:47:38 +0000 (0:00:00.811) 0:03:17.127 *****
2026-01-07 00:54:59.324218 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.324222 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.324226 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.324229 | orchestrator |
2026-01-07 00:54:59.324233 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-07 00:54:59.324237 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:00.539) 0:03:17.666 *****
2026-01-07 00:54:59.324241 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324245 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324248 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.324256 | orchestrator |
2026-01-07 00:54:59.324260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-07 00:54:59.324264 | orchestrator | Wednesday 07 January 2026 00:47:40 +0000 (0:00:00.831) 0:03:18.498 *****
2026-01-07 00:54:59.324267 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.324271 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.324275 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.324279 | orchestrator |
2026-01-07 00:54:59.324283 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-07 00:54:59.324286 | orchestrator | Wednesday 07 January 2026 00:47:40 +0000 (0:00:00.509) 0:03:19.008 *****
2026-01-07 00:54:59.324290 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.324294 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.324298 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.324301 | orchestrator |
2026-01-07 00:54:59.324305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-07 00:54:59.324309 | orchestrator | Wednesday 07 January 2026 00:47:41 +0000 (0:00:01.192) 0:03:20.200 *****
2026-01-07 00:54:59.324313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.324321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.324324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.324328 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324332 | orchestrator |
2026-01-07 00:54:59.324336 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-07 00:54:59.324339 | orchestrator | Wednesday 07 January 2026 00:47:42 +0000 (0:00:00.578) 0:03:20.779 *****
2026-01-07 00:54:59.324343 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.324347 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.324351 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.324354 | orchestrator |
2026-01-07 00:54:59.324358 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-07 00:54:59.324362 | orchestrator | Wednesday 07 January 2026 00:47:42 +0000 (0:00:00.307) 0:03:21.087 *****
2026-01-07 00:54:59.324366 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324370 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.324374 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.324377 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324381 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324401 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324405 | orchestrator |
2026-01-07 00:54:59.324409 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-07 00:54:59.324413 | orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:00.824) 0:03:21.911 *****
2026-01-07 00:54:59.324417 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.324420 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.324424 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.324428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.324432 | orchestrator |
2026-01-07 00:54:59.324436 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-07 00:54:59.324439 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:00.791) 0:03:22.702 *****
2026-01-07 00:54:59.324443 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324447 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324451 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324455 | orchestrator |
2026-01-07 00:54:59.324458 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-07 00:54:59.324462 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:00.541) 0:03:23.244 *****
2026-01-07 00:54:59.324466 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.324470 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.324474 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.324477 | orchestrator |
2026-01-07 00:54:59.324481 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-07 00:54:59.324485 | orchestrator | Wednesday 07 January 2026 00:47:46 +0000 (0:00:01.318) 0:03:24.563 *****
2026-01-07 00:54:59.324492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:54:59.324496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:54:59.324500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:54:59.324503 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324507 | orchestrator |
2026-01-07 00:54:59.324511 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-07 00:54:59.324515 | orchestrator | Wednesday 07 January 2026 00:47:46 +0000 (0:00:00.641) 0:03:25.205 *****
2026-01-07 00:54:59.324519 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324522 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324526 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324530 | orchestrator |
2026-01-07 00:54:59.324534 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-07 00:54:59.324537 | orchestrator |
2026-01-07 00:54:59.324541 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:54:59.324549 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:00.699) 0:03:25.904 *****
2026-01-07 00:54:59.324552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.324556 | orchestrator |
2026-01-07 00:54:59.324560 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:54:59.324564 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:00.604) 0:03:26.508 *****
2026-01-07 00:54:59.324568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.324571 | orchestrator |
2026-01-07 00:54:59.324575 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:54:59.324579 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:00.440) 0:03:26.949 *****
2026-01-07 00:54:59.324583 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324586 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324590 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324594 | orchestrator |
2026-01-07 00:54:59.324598 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:54:59.324601 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:00.901) 0:03:27.851 *****
2026-01-07 00:54:59.324605 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324609 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324613 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324616 | orchestrator |
2026-01-07 00:54:59.324620 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:54:59.324624 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:00.261) 0:03:28.113 *****
2026-01-07 00:54:59.324628 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324631 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324635 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324639 | orchestrator |
2026-01-07 00:54:59.324643 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:54:59.324646 | orchestrator | Wednesday 07 January 2026 00:47:50 +0000 (0:00:00.250) 0:03:28.363 *****
2026-01-07 00:54:59.324650 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324654 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324658 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324661 | orchestrator |
2026-01-07 00:54:59.324665 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:54:59.324669 | orchestrator | Wednesday 07 January 2026 00:47:50 +0000 (0:00:00.273) 0:03:28.637 *****
2026-01-07 00:54:59.324673 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324676 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324682 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324688 | orchestrator |
2026-01-07 00:54:59.324694 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:54:59.324701 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:00.761) 0:03:29.398 *****
2026-01-07 00:54:59.324704 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324708 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324712 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324716 | orchestrator |
2026-01-07 00:54:59.324719 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:54:59.324723 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:00.267) 0:03:29.678 *****
2026-01-07 00:54:59.324742 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324746 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324750 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324754 | orchestrator |
2026-01-07 00:54:59.324758 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:54:59.324761 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:00.267) 0:03:29.946 *****
2026-01-07 00:54:59.324769 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324772 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324776 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324780 | orchestrator |
2026-01-07 00:54:59.324784 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:54:59.324788 | orchestrator | Wednesday 07 January 2026 00:47:52 +0000 (0:00:00.722) 0:03:30.668 *****
2026-01-07 00:54:59.324791 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324795 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324799 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324803 | orchestrator |
2026-01-07 00:54:59.324806 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:54:59.324810 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:00.885) 0:03:31.553 *****
2026-01-07 00:54:59.324814 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324818 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324822 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324825 | orchestrator |
2026-01-07 00:54:59.324829 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:54:59.324833 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:00.338) 0:03:31.892 *****
2026-01-07 00:54:59.324837 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324841 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324848 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324852 | orchestrator |
2026-01-07 00:54:59.324856 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:54:59.324860 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:00.323) 0:03:32.215 *****
2026-01-07 00:54:59.324863 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324867 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324871 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324875 | orchestrator |
2026-01-07 00:54:59.324878 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:54:59.324882 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:00.270) 0:03:32.485 *****
2026-01-07 00:54:59.324886 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324890 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324894 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324897 | orchestrator |
2026-01-07 00:54:59.324901 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:54:59.324905 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:00.252) 0:03:32.738 *****
2026-01-07 00:54:59.324909 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324912 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324916 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324920 | orchestrator |
2026-01-07 00:54:59.324924 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-07 00:54:59.324928 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:00.451) 0:03:33.189 *****
2026-01-07 00:54:59.324932 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324936 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324940 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324944 | orchestrator |
2026-01-07 00:54:59.324947 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-07 00:54:59.324951 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.260) 0:03:33.449 *****
2026-01-07 00:54:59.324955 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.324959 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:54:59.324962 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:54:59.324966 | orchestrator |
2026-01-07 00:54:59.324970 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-07 00:54:59.324974 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.271) 0:03:33.721 *****
2026-01-07 00:54:59.324978 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.324982 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.324989 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.324993 | orchestrator |
2026-01-07 00:54:59.324996 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-07 00:54:59.325000 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:00.317) 0:03:34.039 *****
2026-01-07 00:54:59.325004 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325008 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325011 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325015 | orchestrator |
2026-01-07 00:54:59.325019 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-07 00:54:59.325023 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:00.457) 0:03:34.496 *****
2026-01-07 00:54:59.325026 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325030 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325034 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325038 | orchestrator |
2026-01-07 00:54:59.325042 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-07 00:54:59.325045 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:00.472) 0:03:34.969 *****
2026-01-07 00:54:59.325062 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325066 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325070 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325074 | orchestrator |
2026-01-07 00:54:59.325077 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-07 00:54:59.325081 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:00.276) 0:03:35.246 *****
2026-01-07 00:54:59.325085 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:54:59.325089 | orchestrator |
2026-01-07 00:54:59.325093 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-07 00:54:59.325097 | orchestrator | Wednesday 07 January 2026 00:47:57 +0000 (0:00:00.637) 0:03:35.883 *****
2026-01-07 00:54:59.325101 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:54:59.325104 | orchestrator |
2026-01-07 00:54:59.325122 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-07 00:54:59.325127 | orchestrator | Wednesday 07 January 2026 00:47:57 +0000 (0:00:00.158) 0:03:36.041 *****
2026-01-07 00:54:59.325130 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-07 00:54:59.325134 | orchestrator |
2026-01-07 00:54:59.325138 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-07 00:54:59.325142 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:00.942) 0:03:36.984 *****
2026-01-07 00:54:59.325146 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325150 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325153 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325157 | orchestrator |
2026-01-07 00:54:59.325161 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-07 00:54:59.325165 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:00.279) 0:03:37.263 *****
2026-01-07 00:54:59.325168 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325172 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325176 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325180 | orchestrator |
2026-01-07 00:54:59.325183 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-07 00:54:59.325187 | orchestrator | Wednesday 07 January 2026 00:47:59 +0000 (0:00:00.260) 0:03:37.524 *****
2026-01-07 00:54:59.325191 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.325195 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.325199 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.325203 | orchestrator |
2026-01-07 00:54:59.325206 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-07 00:54:59.325210 | orchestrator | Wednesday 07 January 2026 00:48:00 +0000 (0:00:01.340) 0:03:38.864 *****
2026-01-07 00:54:59.325217 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.325221 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.325228 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.325232 | orchestrator |
2026-01-07 00:54:59.325236 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-07 00:54:59.325239 | orchestrator | Wednesday 07 January 2026 00:48:01 +0000 (0:00:00.803) 0:03:39.668 *****
2026-01-07 00:54:59.325243 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:54:59.325247 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:54:59.325251 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.325254 | orchestrator |
2026-01-07 00:54:59.325258 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-07 00:54:59.325262 | orchestrator | Wednesday 07 January 2026 00:48:02 +0000 (0:00:00.838) 0:03:40.507 *****
2026-01-07 00:54:59.325266 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325269 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:54:59.325273 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:54:59.325277 | orchestrator |
2026-01-07 00:54:59.325281 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-07 00:54:59.325285 | orchestrator | Wednesday 07 January 2026 00:48:02 +0000 (0:00:00.821) 0:03:41.329 *****
2026-01-07 00:54:59.325288 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:54:59.325292 | orchestrator |
2026-01-07 00:54:59.325296 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-07 00:54:59.325300 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:01.390) 0:03:42.719 *****
2026-01-07 00:54:59.325304 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:54:59.325309 | orchestrator |
2026-01-07 00:54:59.325314 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-07 00:54:59.325321 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:01.128) 0:03:43.847 *****
2026-01-07 00:54:59.325327 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:54:59.325332 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07
00:54:59.325338 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.325345 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:54:59.325351 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-07 00:54:59.325357 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:54:59.325362 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:54:59.325369 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-07 00:54:59.325375 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:54:59.325381 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-07 00:54:59.325387 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-07 00:54:59.325394 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-07 00:54:59.325399 | orchestrator | 2026-01-07 00:54:59.325406 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-07 00:54:59.325412 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:02.974) 0:03:46.822 ***** 2026-01-07 00:54:59.325418 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325424 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325429 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325433 | orchestrator | 2026-01-07 00:54:59.325437 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-07 00:54:59.325441 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:01.241) 0:03:48.063 ***** 2026-01-07 00:54:59.325444 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.325448 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.325452 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.325456 | orchestrator | 2026-01-07 
00:54:59.325459 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-07 00:54:59.325463 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:00.243) 0:03:48.307 ***** 2026-01-07 00:54:59.325473 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.325477 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.325480 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.325484 | orchestrator | 2026-01-07 00:54:59.325488 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-07 00:54:59.325492 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:00.443) 0:03:48.751 ***** 2026-01-07 00:54:59.325496 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325518 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325522 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325526 | orchestrator | 2026-01-07 00:54:59.325530 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-07 00:54:59.325534 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:01.501) 0:03:50.252 ***** 2026-01-07 00:54:59.325537 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325541 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325545 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325549 | orchestrator | 2026-01-07 00:54:59.325553 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-07 00:54:59.325556 | orchestrator | Wednesday 07 January 2026 00:48:13 +0000 (0:00:01.332) 0:03:51.584 ***** 2026-01-07 00:54:59.325560 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.325564 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.325568 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.325571 | orchestrator | 2026-01-07 00:54:59.325575 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-07 00:54:59.325579 | orchestrator | Wednesday 07 January 2026 00:48:13 +0000 (0:00:00.305) 0:03:51.889 ***** 2026-01-07 00:54:59.325583 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.325587 | orchestrator | 2026-01-07 00:54:59.325590 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-07 00:54:59.325594 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:00.761) 0:03:52.650 ***** 2026-01-07 00:54:59.325598 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.325608 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.325614 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.325620 | orchestrator | 2026-01-07 00:54:59.325626 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-07 00:54:59.325632 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:00.544) 0:03:53.194 ***** 2026-01-07 00:54:59.325638 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.325644 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.325650 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.325656 | orchestrator | 2026-01-07 00:54:59.325662 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-07 00:54:59.325668 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:00.347) 0:03:53.542 ***** 2026-01-07 00:54:59.325673 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.325679 | orchestrator | 2026-01-07 00:54:59.325685 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-07 00:54:59.325691 | 
orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:00.719) 0:03:54.261 ***** 2026-01-07 00:54:59.325697 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325704 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325708 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325713 | orchestrator | 2026-01-07 00:54:59.325718 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-07 00:54:59.325724 | orchestrator | Wednesday 07 January 2026 00:48:17 +0000 (0:00:01.911) 0:03:56.173 ***** 2026-01-07 00:54:59.325730 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325736 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325749 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325760 | orchestrator | 2026-01-07 00:54:59.325771 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-07 00:54:59.325777 | orchestrator | Wednesday 07 January 2026 00:48:19 +0000 (0:00:01.944) 0:03:58.117 ***** 2026-01-07 00:54:59.325783 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325789 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325794 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325800 | orchestrator | 2026-01-07 00:54:59.325805 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-07 00:54:59.325810 | orchestrator | Wednesday 07 January 2026 00:48:21 +0000 (0:00:02.100) 0:04:00.217 ***** 2026-01-07 00:54:59.325816 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.325822 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.325828 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.325834 | orchestrator | 2026-01-07 00:54:59.325839 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-07 00:54:59.325846 | orchestrator | 
Wednesday 07 January 2026 00:48:24 +0000 (0:00:02.826) 0:04:03.044 ***** 2026-01-07 00:54:59.325851 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.325857 | orchestrator | 2026-01-07 00:54:59.325864 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-07 00:54:59.325870 | orchestrator | Wednesday 07 January 2026 00:48:25 +0000 (0:00:00.683) 0:04:03.728 ***** 2026-01-07 00:54:59.325876 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-01-07 00:54:59.325882 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.325888 | orchestrator | 2026-01-07 00:54:59.325893 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-07 00:54:59.325899 | orchestrator | Wednesday 07 January 2026 00:48:47 +0000 (0:00:22.006) 0:04:25.734 ***** 2026-01-07 00:54:59.325907 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.325912 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.325915 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.325919 | orchestrator | 2026-01-07 00:54:59.325923 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-07 00:54:59.325927 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:08.891) 0:04:34.626 ***** 2026-01-07 00:54:59.325931 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.325935 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.325939 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.325943 | orchestrator | 2026-01-07 00:54:59.325947 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-07 00:54:59.325975 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:00.370) 0:04:34.996 ***** 2026-01-07 
00:54:59.325983 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-07 00:54:59.325989 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-07 00:54:59.326003 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-07 00:54:59.326064 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-07 00:54:59.326076 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-07 00:54:59.326086 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__61b3fe527e3795a53c55bc93e6f58219ccf4d6ad'}])  2026-01-07 00:54:59.326094 | orchestrator | 2026-01-07 00:54:59.326099 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:54:59.326105 | orchestrator | Wednesday 07 January 2026 00:49:12 +0000 (0:00:15.750) 0:04:50.747 ***** 2026-01-07 00:54:59.326111 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326116 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326123 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326129 | orchestrator | 2026-01-07 00:54:59.326134 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-07 00:54:59.326140 | orchestrator | Wednesday 07 January 2026 00:49:12 +0000 (0:00:00.290) 0:04:51.037 ***** 2026-01-07 00:54:59.326146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.326152 | orchestrator | 2026-01-07 00:54:59.326158 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-07 00:54:59.326163 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:00.659) 0:04:51.696 ***** 2026-01-07 00:54:59.326169 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326175 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326181 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 00:54:59.326187 | orchestrator | 2026-01-07 00:54:59.326193 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-07 00:54:59.326199 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:00.268) 0:04:51.965 ***** 2026-01-07 00:54:59.326205 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326212 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326216 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326220 | orchestrator | 2026-01-07 00:54:59.326223 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-07 00:54:59.326227 | orchestrator | Wednesday 07 January 2026 00:49:13 +0000 (0:00:00.272) 0:04:52.238 ***** 2026-01-07 00:54:59.326231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:54:59.326235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:54:59.326238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:54:59.326242 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326246 | orchestrator | 2026-01-07 00:54:59.326250 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-07 00:54:59.326253 | orchestrator | Wednesday 07 January 2026 00:49:14 +0000 (0:00:00.732) 0:04:52.970 ***** 2026-01-07 00:54:59.326257 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326262 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326286 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326297 | orchestrator | 2026-01-07 00:54:59.326301 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-07 00:54:59.326305 | orchestrator | 2026-01-07 00:54:59.326309 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 
00:54:59.326313 | orchestrator | Wednesday 07 January 2026 00:49:15 +0000 (0:00:00.734) 0:04:53.705 ***** 2026-01-07 00:54:59.326317 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.326321 | orchestrator | 2026-01-07 00:54:59.326325 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:54:59.326329 | orchestrator | Wednesday 07 January 2026 00:49:15 +0000 (0:00:00.464) 0:04:54.169 ***** 2026-01-07 00:54:59.326333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.326337 | orchestrator | 2026-01-07 00:54:59.326341 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:54:59.326345 | orchestrator | Wednesday 07 January 2026 00:49:16 +0000 (0:00:00.604) 0:04:54.774 ***** 2026-01-07 00:54:59.326348 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326352 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326356 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326360 | orchestrator | 2026-01-07 00:54:59.326363 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:54:59.326372 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:00.605) 0:04:55.380 ***** 2026-01-07 00:54:59.326376 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326380 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326383 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326387 | orchestrator | 2026-01-07 00:54:59.326391 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:54:59.326395 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:00.221) 0:04:55.602 ***** 
2026-01-07 00:54:59.326399 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326402 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326406 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326410 | orchestrator | 2026-01-07 00:54:59.326414 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:54:59.326417 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:00.406) 0:04:56.008 ***** 2026-01-07 00:54:59.326421 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326425 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326429 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326432 | orchestrator | 2026-01-07 00:54:59.326436 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:54:59.326440 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:00.282) 0:04:56.291 ***** 2026-01-07 00:54:59.326444 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326448 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326451 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326455 | orchestrator | 2026-01-07 00:54:59.326459 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:54:59.326463 | orchestrator | Wednesday 07 January 2026 00:49:18 +0000 (0:00:00.922) 0:04:57.213 ***** 2026-01-07 00:54:59.326467 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326470 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326474 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326478 | orchestrator | 2026-01-07 00:54:59.326482 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:54:59.326486 | orchestrator | Wednesday 07 January 2026 00:49:19 +0000 (0:00:00.290) 0:04:57.503 ***** 2026-01-07 
00:54:59.326489 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326493 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326497 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326505 | orchestrator | 2026-01-07 00:54:59.326509 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:54:59.326513 | orchestrator | Wednesday 07 January 2026 00:49:19 +0000 (0:00:00.266) 0:04:57.770 ***** 2026-01-07 00:54:59.326516 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326520 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326524 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326530 | orchestrator | 2026-01-07 00:54:59.326536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:54:59.326541 | orchestrator | Wednesday 07 January 2026 00:49:20 +0000 (0:00:00.855) 0:04:58.626 ***** 2026-01-07 00:54:59.326549 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326558 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326564 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326570 | orchestrator | 2026-01-07 00:54:59.326575 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:54:59.326581 | orchestrator | Wednesday 07 January 2026 00:49:20 +0000 (0:00:00.619) 0:04:59.245 ***** 2026-01-07 00:54:59.326587 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326593 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326599 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326605 | orchestrator | 2026-01-07 00:54:59.326611 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:54:59.326617 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:00.268) 0:04:59.514 ***** 2026-01-07 00:54:59.326623 | orchestrator | ok: 
[testbed-node-0] 2026-01-07 00:54:59.326627 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326630 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326634 | orchestrator | 2026-01-07 00:54:59.326638 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:54:59.326642 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:00.294) 0:04:59.809 ***** 2026-01-07 00:54:59.326645 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326649 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326653 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326657 | orchestrator | 2026-01-07 00:54:59.326660 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:54:59.326684 | orchestrator | Wednesday 07 January 2026 00:49:21 +0000 (0:00:00.434) 0:05:00.244 ***** 2026-01-07 00:54:59.326689 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326693 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326697 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326700 | orchestrator | 2026-01-07 00:54:59.326704 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:54:59.326708 | orchestrator | Wednesday 07 January 2026 00:49:22 +0000 (0:00:00.271) 0:05:00.515 ***** 2026-01-07 00:54:59.326712 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326716 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326720 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326724 | orchestrator | 2026-01-07 00:54:59.326728 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:54:59.326732 | orchestrator | Wednesday 07 January 2026 00:49:22 +0000 (0:00:00.274) 0:05:00.790 ***** 2026-01-07 00:54:59.326736 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:54:59.326739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326743 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326747 | orchestrator | 2026-01-07 00:54:59.326751 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:54:59.326755 | orchestrator | Wednesday 07 January 2026 00:49:22 +0000 (0:00:00.284) 0:05:01.075 ***** 2026-01-07 00:54:59.326758 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.326764 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326770 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326776 | orchestrator | 2026-01-07 00:54:59.326780 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:54:59.326791 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.415) 0:05:01.491 ***** 2026-01-07 00:54:59.326798 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326802 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326806 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326810 | orchestrator | 2026-01-07 00:54:59.326813 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:54:59.326817 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.286) 0:05:01.777 ***** 2026-01-07 00:54:59.326821 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326825 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326829 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326832 | orchestrator | 2026-01-07 00:54:59.326836 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:54:59.326840 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.293) 0:05:02.071 ***** 2026-01-07 00:54:59.326844 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326848 | 
orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326851 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326855 | orchestrator | 2026-01-07 00:54:59.326859 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:54:59.326863 | orchestrator | Wednesday 07 January 2026 00:49:24 +0000 (0:00:00.607) 0:05:02.678 ***** 2026-01-07 00:54:59.326867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:54:59.326871 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:54:59.326875 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:54:59.326878 | orchestrator | 2026-01-07 00:54:59.326882 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-07 00:54:59.326886 | orchestrator | Wednesday 07 January 2026 00:49:24 +0000 (0:00:00.549) 0:05:03.227 ***** 2026-01-07 00:54:59.326890 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.326894 | orchestrator | 2026-01-07 00:54:59.326898 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-07 00:54:59.326902 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:00.483) 0:05:03.711 ***** 2026-01-07 00:54:59.326906 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.326909 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.326913 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.326917 | orchestrator | 2026-01-07 00:54:59.326921 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-07 00:54:59.326925 | orchestrator | Wednesday 07 January 2026 00:49:26 +0000 (0:00:00.884) 0:05:04.595 ***** 2026-01-07 00:54:59.326929 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:54:59.326933 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.326937 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.326941 | orchestrator | 2026-01-07 00:54:59.326945 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-07 00:54:59.326948 | orchestrator | Wednesday 07 January 2026 00:49:26 +0000 (0:00:00.442) 0:05:05.038 ***** 2026-01-07 00:54:59.326952 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:54:59.326957 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:54:59.326960 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:54:59.326964 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-07 00:54:59.326968 | orchestrator | 2026-01-07 00:54:59.326972 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-07 00:54:59.326976 | orchestrator | Wednesday 07 January 2026 00:49:36 +0000 (0:00:09.601) 0:05:14.639 ***** 2026-01-07 00:54:59.326980 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.326984 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.326988 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.326995 | orchestrator | 2026-01-07 00:54:59.326998 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-07 00:54:59.327002 | orchestrator | Wednesday 07 January 2026 00:49:36 +0000 (0:00:00.256) 0:05:14.895 ***** 2026-01-07 00:54:59.327006 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-07 00:54:59.327010 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-07 00:54:59.327014 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-07 00:54:59.327018 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-07 00:54:59.327021 | orchestrator | ok: [testbed-node-2 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.327042 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.327046 | orchestrator | 2026-01-07 00:54:59.327089 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:54:59.327093 | orchestrator | Wednesday 07 January 2026 00:49:40 +0000 (0:00:03.570) 0:05:18.466 ***** 2026-01-07 00:54:59.327097 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-07 00:54:59.327101 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-07 00:54:59.327105 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-07 00:54:59.327109 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:54:59.327113 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-07 00:54:59.327117 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-07 00:54:59.327120 | orchestrator | 2026-01-07 00:54:59.327125 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-07 00:54:59.327128 | orchestrator | Wednesday 07 January 2026 00:49:41 +0000 (0:00:01.333) 0:05:19.800 ***** 2026-01-07 00:54:59.327132 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.327136 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.327140 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.327144 | orchestrator | 2026-01-07 00:54:59.327148 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-07 00:54:59.327152 | orchestrator | Wednesday 07 January 2026 00:49:42 +0000 (0:00:01.036) 0:05:20.837 ***** 2026-01-07 00:54:59.327155 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327159 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.327163 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.327167 | orchestrator | 2026-01-07 
00:54:59.327175 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-07 00:54:59.327179 | orchestrator | Wednesday 07 January 2026 00:49:42 +0000 (0:00:00.282) 0:05:21.120 ***** 2026-01-07 00:54:59.327183 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327186 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.327190 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.327194 | orchestrator | 2026-01-07 00:54:59.327198 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-07 00:54:59.327202 | orchestrator | Wednesday 07 January 2026 00:49:42 +0000 (0:00:00.220) 0:05:21.340 ***** 2026-01-07 00:54:59.327206 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.327210 | orchestrator | 2026-01-07 00:54:59.327213 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-07 00:54:59.327217 | orchestrator | Wednesday 07 January 2026 00:49:43 +0000 (0:00:00.622) 0:05:21.963 ***** 2026-01-07 00:54:59.327221 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327225 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.327229 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.327233 | orchestrator | 2026-01-07 00:54:59.327237 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-07 00:54:59.327241 | orchestrator | Wednesday 07 January 2026 00:49:43 +0000 (0:00:00.257) 0:05:22.220 ***** 2026-01-07 00:54:59.327245 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.327249 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327261 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.327265 | orchestrator | 2026-01-07 00:54:59.327269 | orchestrator | TASK [ceph-mgr : Include_tasks 
systemd.yml] ************************************ 2026-01-07 00:54:59.327273 | orchestrator | Wednesday 07 January 2026 00:49:44 +0000 (0:00:00.290) 0:05:22.511 ***** 2026-01-07 00:54:59.327277 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.327281 | orchestrator | 2026-01-07 00:54:59.327285 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-07 00:54:59.327289 | orchestrator | Wednesday 07 January 2026 00:49:44 +0000 (0:00:00.402) 0:05:22.914 ***** 2026-01-07 00:54:59.327292 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327296 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327300 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327304 | orchestrator | 2026-01-07 00:54:59.327308 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-07 00:54:59.327311 | orchestrator | Wednesday 07 January 2026 00:49:46 +0000 (0:00:01.600) 0:05:24.514 ***** 2026-01-07 00:54:59.327315 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327319 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327323 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327327 | orchestrator | 2026-01-07 00:54:59.327330 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-07 00:54:59.327334 | orchestrator | Wednesday 07 January 2026 00:49:47 +0000 (0:00:01.351) 0:05:25.865 ***** 2026-01-07 00:54:59.327338 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327342 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327346 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327350 | orchestrator | 2026-01-07 00:54:59.327353 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-07 00:54:59.327357 | 
orchestrator | Wednesday 07 January 2026 00:49:50 +0000 (0:00:02.746) 0:05:28.612 ***** 2026-01-07 00:54:59.327361 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327365 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327369 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327373 | orchestrator | 2026-01-07 00:54:59.327376 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-07 00:54:59.327380 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:01.956) 0:05:30.568 ***** 2026-01-07 00:54:59.327384 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327388 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.327392 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-07 00:54:59.327396 | orchestrator | 2026-01-07 00:54:59.327399 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-07 00:54:59.327403 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:00.753) 0:05:31.321 ***** 2026-01-07 00:54:59.327424 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-07 00:54:59.327429 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-07 00:54:59.327432 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-07 00:54:59.327436 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-07 00:54:59.327440 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-01-07 00:54:59.327444 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:54:59.327448 | orchestrator | 2026-01-07 00:54:59.327452 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-07 00:54:59.327455 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:30.286) 0:06:01.608 ***** 2026-01-07 00:54:59.327459 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:54:59.327466 | orchestrator | 2026-01-07 00:54:59.327470 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-07 00:54:59.327474 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:01.207) 0:06:02.815 ***** 2026-01-07 00:54:59.327478 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.327482 | orchestrator | 2026-01-07 00:54:59.327485 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-07 00:54:59.327492 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.279) 0:06:03.095 ***** 2026-01-07 00:54:59.327496 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.327500 | orchestrator | 2026-01-07 00:54:59.327504 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-07 00:54:59.327508 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.148) 0:06:03.244 ***** 2026-01-07 00:54:59.327512 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-07 00:54:59.327516 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-07 00:54:59.327519 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-07 00:54:59.327525 | orchestrator | 2026-01-07 00:54:59.327531 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-07 00:54:59.327538 | orchestrator | Wednesday 07 January 2026 00:50:31 +0000 (0:00:06.291) 0:06:09.535 ***** 2026-01-07 00:54:59.327544 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-07 00:54:59.327550 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-07 00:54:59.327558 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-07 00:54:59.327564 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-07 00:54:59.327570 | orchestrator | 2026-01-07 00:54:59.327576 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:54:59.327583 | orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:04.745) 0:06:14.280 ***** 2026-01-07 00:54:59.327589 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327595 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327600 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327604 | orchestrator | 2026-01-07 00:54:59.327608 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-07 00:54:59.327612 | orchestrator | Wednesday 07 January 2026 00:50:36 +0000 (0:00:00.626) 0:06:14.906 ***** 2026-01-07 00:54:59.327616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.327620 | orchestrator | 2026-01-07 00:54:59.327624 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-07 00:54:59.327627 | orchestrator | Wednesday 07 January 2026 00:50:36 +0000 (0:00:00.424) 0:06:15.331 ***** 2026-01-07 00:54:59.327631 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.327635 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.327639 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 00:54:59.327642 | orchestrator | 2026-01-07 00:54:59.327646 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-07 00:54:59.327650 | orchestrator | Wednesday 07 January 2026 00:50:37 +0000 (0:00:00.389) 0:06:15.721 ***** 2026-01-07 00:54:59.327654 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:54:59.327658 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:54:59.327662 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:54:59.327665 | orchestrator | 2026-01-07 00:54:59.327669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-07 00:54:59.327673 | orchestrator | Wednesday 07 January 2026 00:50:38 +0000 (0:00:01.079) 0:06:16.800 ***** 2026-01-07 00:54:59.327677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:54:59.327680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:54:59.327689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:54:59.327692 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.327696 | orchestrator | 2026-01-07 00:54:59.327700 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-07 00:54:59.327703 | orchestrator | Wednesday 07 January 2026 00:50:38 +0000 (0:00:00.538) 0:06:17.339 ***** 2026-01-07 00:54:59.327707 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.327711 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.327715 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.327719 | orchestrator | 2026-01-07 00:54:59.327722 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-07 00:54:59.327726 | orchestrator | 2026-01-07 00:54:59.327730 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 
00:54:59.327734 | orchestrator | Wednesday 07 January 2026 00:50:39 +0000 (0:00:00.585) 0:06:17.924 ***** 2026-01-07 00:54:59.327757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.327762 | orchestrator | 2026-01-07 00:54:59.327765 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:54:59.327769 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.432) 0:06:18.357 ***** 2026-01-07 00:54:59.327773 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.327777 | orchestrator | 2026-01-07 00:54:59.327781 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:54:59.327785 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.551) 0:06:18.908 ***** 2026-01-07 00:54:59.327788 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.327792 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.327796 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.327800 | orchestrator | 2026-01-07 00:54:59.327804 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:54:59.327807 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.252) 0:06:19.161 ***** 2026-01-07 00:54:59.327811 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.327815 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.327819 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.327822 | orchestrator | 2026-01-07 00:54:59.327826 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:54:59.327830 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.649) 0:06:19.810 ***** 
2026-01-07 00:54:59.327834 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.327842 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.327846 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.327849 | orchestrator | 2026-01-07 00:54:59.327853 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:54:59.327857 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:00.625) 0:06:20.435 ***** 2026-01-07 00:54:59.327861 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.327865 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.327870 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.327877 | orchestrator | 2026-01-07 00:54:59.327883 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:54:59.327892 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:00.597) 0:06:21.032 ***** 2026-01-07 00:54:59.327902 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.327908 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.327914 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.327920 | orchestrator | 2026-01-07 00:54:59.327926 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:54:59.327932 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:00.457) 0:06:21.490 ***** 2026-01-07 00:54:59.327938 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.327951 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.327957 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.327963 | orchestrator | 2026-01-07 00:54:59.327970 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:54:59.327976 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:00.297) 0:06:21.788 ***** 2026-01-07 00:54:59.327982 | 
orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.327988 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.327995 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328002 | orchestrator | 2026-01-07 00:54:59.328009 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:54:59.328016 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:00.302) 0:06:22.090 ***** 2026-01-07 00:54:59.328022 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328028 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328034 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328038 | orchestrator | 2026-01-07 00:54:59.328042 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:54:59.328046 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.682) 0:06:22.773 ***** 2026-01-07 00:54:59.328066 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328070 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328073 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328077 | orchestrator | 2026-01-07 00:54:59.328081 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:54:59.328085 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:01.005) 0:06:23.778 ***** 2026-01-07 00:54:59.328088 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328092 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328096 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328099 | orchestrator | 2026-01-07 00:54:59.328103 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:54:59.328107 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:00.327) 0:06:24.105 ***** 2026-01-07 00:54:59.328111 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:54:59.328114 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328118 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328122 | orchestrator | 2026-01-07 00:54:59.328126 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:54:59.328129 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:00.230) 0:06:24.336 ***** 2026-01-07 00:54:59.328133 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328137 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328141 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328144 | orchestrator | 2026-01-07 00:54:59.328148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:54:59.328152 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:00.212) 0:06:24.548 ***** 2026-01-07 00:54:59.328156 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328160 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328164 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328167 | orchestrator | 2026-01-07 00:54:59.328171 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:54:59.328175 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:00.448) 0:06:24.996 ***** 2026-01-07 00:54:59.328179 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328182 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328191 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328194 | orchestrator | 2026-01-07 00:54:59.328198 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:54:59.328202 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:00.226) 0:06:25.223 ***** 2026-01-07 00:54:59.328206 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328209 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328213 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328225 | orchestrator | 2026-01-07 00:54:59.328229 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:54:59.328233 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.205) 0:06:25.428 ***** 2026-01-07 00:54:59.328237 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328241 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328244 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328248 | orchestrator | 2026-01-07 00:54:59.328252 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:54:59.328256 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.211) 0:06:25.640 ***** 2026-01-07 00:54:59.328260 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328263 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328267 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328271 | orchestrator | 2026-01-07 00:54:59.328275 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:54:59.328278 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.356) 0:06:25.996 ***** 2026-01-07 00:54:59.328282 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328286 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328290 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328294 | orchestrator | 2026-01-07 00:54:59.328301 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:54:59.328305 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.252) 0:06:26.249 ***** 2026-01-07 00:54:59.328309 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328313 | orchestrator | ok: 
[testbed-node-4] 2026-01-07 00:54:59.328317 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328321 | orchestrator | 2026-01-07 00:54:59.328324 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-07 00:54:59.328328 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:00.396) 0:06:26.646 ***** 2026-01-07 00:54:59.328332 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328336 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328340 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328343 | orchestrator | 2026-01-07 00:54:59.328347 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:54:59.328351 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:00.389) 0:06:27.035 ***** 2026-01-07 00:54:59.328355 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:54:59.328359 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:54:59.328363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:54:59.328366 | orchestrator | 2026-01-07 00:54:59.328370 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-07 00:54:59.328374 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:00.501) 0:06:27.537 ***** 2026-01-07 00:54:59.328378 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.328382 | orchestrator | 2026-01-07 00:54:59.328385 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-07 00:54:59.328389 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:00.398) 0:06:27.935 ***** 2026-01-07 00:54:59.328393 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:54:59.328397 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328401 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328404 | orchestrator | 2026-01-07 00:54:59.328408 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-07 00:54:59.328412 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:00.356) 0:06:28.291 ***** 2026-01-07 00:54:59.328416 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328420 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328423 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328432 | orchestrator | 2026-01-07 00:54:59.328435 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-07 00:54:59.328439 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:00.256) 0:06:28.547 ***** 2026-01-07 00:54:59.328443 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328447 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328451 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328455 | orchestrator | 2026-01-07 00:54:59.328458 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-07 00:54:59.328462 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:00.602) 0:06:29.149 ***** 2026-01-07 00:54:59.328466 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328470 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328474 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328477 | orchestrator | 2026-01-07 00:54:59.328481 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-07 00:54:59.328485 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.268) 0:06:29.418 ***** 2026-01-07 00:54:59.328489 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-07 00:54:59.328493 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-07 00:54:59.328497 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-07 00:54:59.328500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-07 00:54:59.328504 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-07 00:54:59.328515 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-07 00:54:59.328519 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-07 00:54:59.328524 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-07 00:54:59.328530 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-07 00:54:59.328538 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-07 00:54:59.328546 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-07 00:54:59.328552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-07 00:54:59.328557 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-07 00:54:59.328563 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-07 00:54:59.328570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-07 00:54:59.328576 | orchestrator | 2026-01-07 00:54:59.328582 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-07 00:54:59.328588 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:03.051) 0:06:32.470 ***** 2026-01-07 00:54:59.328594 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328602 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.328606 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328610 | orchestrator | 2026-01-07 00:54:59.328613 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-07 00:54:59.328617 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:00.256) 0:06:32.726 ***** 2026-01-07 00:54:59.328621 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.328625 | orchestrator | 2026-01-07 00:54:59.328628 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-07 00:54:59.328632 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:00.393) 0:06:33.120 ***** 2026-01-07 00:54:59.328640 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-07 00:54:59.328644 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-07 00:54:59.328648 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-07 00:54:59.328652 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-07 00:54:59.328656 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-07 00:54:59.328660 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-07 00:54:59.328664 | orchestrator | 2026-01-07 00:54:59.328667 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-07 00:54:59.328671 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:01.011) 0:06:34.131 ***** 2026-01-07 00:54:59.328676 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.328683 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-07 00:54:59.328688 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:54:59.328694 | orchestrator | 2026-01-07 00:54:59.328700 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:54:59.328707 | orchestrator | Wednesday 07 January 2026 00:50:57 +0000 (0:00:01.911) 0:06:36.042 ***** 2026-01-07 00:54:59.328712 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-07 00:54:59.328718 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-07 00:54:59.328724 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:59.328729 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-07 00:54:59.328736 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-07 00:54:59.328742 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.328748 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:54:59.328755 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-07 00:54:59.328761 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.328768 | orchestrator | 2026-01-07 00:54:59.328774 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-07 00:54:59.328780 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:00.995) 0:06:37.038 ***** 2026-01-07 00:54:59.328786 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:54:59.328793 | orchestrator | 2026-01-07 00:54:59.328797 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-07 00:54:59.328801 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:02.365) 0:06:39.404 ***** 2026-01-07 00:54:59.328805 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.328809 | orchestrator | 2026-01-07 00:54:59.328813 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-07 00:54:59.328817 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:00.472) 0:06:39.877 ***** 2026-01-07 00:54:59.328821 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-23474997-0e8b-5abe-afd2-a58c42930ca8', 'data_vg': 'ceph-23474997-0e8b-5abe-afd2-a58c42930ca8'}) 2026-01-07 00:54:59.328827 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-96f57bfe-16b3-5bb1-823a-e63af6581955', 'data_vg': 'ceph-96f57bfe-16b3-5bb1-823a-e63af6581955'}) 2026-01-07 00:54:59.328835 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b296d094-78ce-5ce3-9fe3-598726116dc8', 'data_vg': 'ceph-b296d094-78ce-5ce3-9fe3-598726116dc8'}) 2026-01-07 00:54:59.328839 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-18b58870-6028-5d13-8db0-fb505e00be4b', 'data_vg': 'ceph-18b58870-6028-5d13-8db0-fb505e00be4b'}) 2026-01-07 00:54:59.328843 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73010335-3e9e-51ea-81b3-4dcf5932c07d', 'data_vg': 'ceph-73010335-3e9e-51ea-81b3-4dcf5932c07d'}) 2026-01-07 00:54:59.328847 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e44d1cae-1e57-574a-aa47-ecf7991dd637', 'data_vg': 'ceph-e44d1cae-1e57-574a-aa47-ecf7991dd637'}) 2026-01-07 00:54:59.328855 | orchestrator | 2026-01-07 00:54:59.328859 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-07 00:54:59.328863 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:40.238) 0:07:20.115 ***** 2026-01-07 00:54:59.328867 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.328870 | orchestrator | skipping: [testbed-node-4] 2026-01-07 
00:54:59.328874 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.328878 | orchestrator | 2026-01-07 00:54:59.328882 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-07 00:54:59.328886 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:00.313) 0:07:20.428 ***** 2026-01-07 00:54:59.328890 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.328893 | orchestrator | 2026-01-07 00:54:59.328901 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-07 00:54:59.328907 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:00.487) 0:07:20.915 ***** 2026-01-07 00:54:59.328912 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328918 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328922 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328925 | orchestrator | 2026-01-07 00:54:59.328929 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-07 00:54:59.328933 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:00.860) 0:07:21.776 ***** 2026-01-07 00:54:59.328937 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.328941 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.328945 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.328948 | orchestrator | 2026-01-07 00:54:59.328952 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-07 00:54:59.328956 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:02.195) 0:07:23.971 ***** 2026-01-07 00:54:59.328960 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.328964 | orchestrator | 2026-01-07 00:54:59.328968 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-07 00:54:59.328971 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.491) 0:07:24.462 ***** 2026-01-07 00:54:59.328975 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:59.328979 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.328983 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.328987 | orchestrator | 2026-01-07 00:54:59.328990 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-07 00:54:59.328994 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:01.358) 0:07:25.821 ***** 2026-01-07 00:54:59.328998 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:59.329002 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.329006 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.329012 | orchestrator | 2026-01-07 00:54:59.329018 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-07 00:54:59.329029 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:01.105) 0:07:26.927 ***** 2026-01-07 00:54:59.329034 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:59.329040 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.329046 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.329071 | orchestrator | 2026-01-07 00:54:59.329079 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-07 00:54:59.329086 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:02.014) 0:07:28.942 ***** 2026-01-07 00:54:59.329092 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329099 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329107 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329113 | orchestrator | 2026-01-07 00:54:59.329119 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-07 00:54:59.329133 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:00.327) 0:07:29.269 ***** 2026-01-07 00:54:59.329138 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329144 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329149 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329155 | orchestrator | 2026-01-07 00:54:59.329160 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-07 00:54:59.329166 | orchestrator | Wednesday 07 January 2026 00:51:51 +0000 (0:00:00.615) 0:07:29.885 ***** 2026-01-07 00:54:59.329171 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-01-07 00:54:59.329177 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-01-07 00:54:59.329182 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-01-07 00:54:59.329188 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-01-07 00:54:59.329194 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-07 00:54:59.329200 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-01-07 00:54:59.329206 | orchestrator | 2026-01-07 00:54:59.329212 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-07 00:54:59.329219 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:01.064) 0:07:30.949 ***** 2026-01-07 00:54:59.329225 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-07 00:54:59.329231 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-07 00:54:59.329238 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-07 00:54:59.329243 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-01-07 00:54:59.329246 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-07 00:54:59.329255 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-07 00:54:59.329259 | orchestrator | 2026-01-07 00:54:59.329263 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-07 00:54:59.329267 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:02.207) 0:07:33.157 ***** 2026-01-07 00:54:59.329271 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-07 00:54:59.329275 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-07 00:54:59.329279 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-01-07 00:54:59.329282 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-01-07 00:54:59.329287 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-07 00:54:59.329293 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-07 00:54:59.329299 | orchestrator | 2026-01-07 00:54:59.329306 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-07 00:54:59.329312 | orchestrator | Wednesday 07 January 2026 00:51:58 +0000 (0:00:03.311) 0:07:36.469 ***** 2026-01-07 00:54:59.329317 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329322 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329329 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:54:59.329335 | orchestrator | 2026-01-07 00:54:59.329342 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-07 00:54:59.329348 | orchestrator | Wednesday 07 January 2026 00:52:01 +0000 (0:00:02.896) 0:07:39.365 ***** 2026-01-07 00:54:59.329354 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329360 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329371 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-07 00:54:59.329379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:54:59.329383 | orchestrator | 2026-01-07 00:54:59.329386 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-07 00:54:59.329390 | orchestrator | Wednesday 07 January 2026 00:52:13 +0000 (0:00:12.589) 0:07:51.954 ***** 2026-01-07 00:54:59.329394 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329398 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329402 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329405 | orchestrator | 2026-01-07 00:54:59.329409 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:54:59.329417 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:01.050) 0:07:53.005 ***** 2026-01-07 00:54:59.329421 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329425 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329429 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329432 | orchestrator | 2026-01-07 00:54:59.329436 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-07 00:54:59.329440 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:00.343) 0:07:53.349 ***** 2026-01-07 00:54:59.329444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.329448 | orchestrator | 2026-01-07 00:54:59.329452 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-07 00:54:59.329456 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.523) 0:07:53.873 ***** 2026-01-07 00:54:59.329459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:54:59.329463 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-07 00:54:59.329467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:54:59.329471 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329474 | orchestrator | 2026-01-07 00:54:59.329478 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-07 00:54:59.329482 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.640) 0:07:54.514 ***** 2026-01-07 00:54:59.329486 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329489 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329493 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329497 | orchestrator | 2026-01-07 00:54:59.329501 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-07 00:54:59.329504 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.587) 0:07:55.101 ***** 2026-01-07 00:54:59.329508 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329512 | orchestrator | 2026-01-07 00:54:59.329516 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-07 00:54:59.329520 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.236) 0:07:55.338 ***** 2026-01-07 00:54:59.329525 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329533 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329542 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329547 | orchestrator | 2026-01-07 00:54:59.329553 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-07 00:54:59.329559 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.329) 0:07:55.667 ***** 2026-01-07 00:54:59.329565 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329572 | orchestrator | 2026-01-07 00:54:59.329578 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-07 00:54:59.329583 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.219) 0:07:55.886 ***** 2026-01-07 00:54:59.329591 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329595 | orchestrator | 2026-01-07 00:54:59.329598 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-07 00:54:59.329602 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.225) 0:07:56.112 ***** 2026-01-07 00:54:59.329606 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329610 | orchestrator | 2026-01-07 00:54:59.329613 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-07 00:54:59.329617 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.122) 0:07:56.235 ***** 2026-01-07 00:54:59.329621 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329625 | orchestrator | 2026-01-07 00:54:59.329633 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-07 00:54:59.329637 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.228) 0:07:56.464 ***** 2026-01-07 00:54:59.329645 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329649 | orchestrator | 2026-01-07 00:54:59.329653 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-07 00:54:59.329656 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.206) 0:07:56.670 ***** 2026-01-07 00:54:59.329660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:54:59.329664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:54:59.329668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:54:59.329671 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
00:54:59.329675 | orchestrator | 2026-01-07 00:54:59.329679 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-07 00:54:59.329683 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:01.021) 0:07:57.692 ***** 2026-01-07 00:54:59.329686 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329690 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329694 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329698 | orchestrator | 2026-01-07 00:54:59.329701 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-07 00:54:59.329705 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.312) 0:07:58.004 ***** 2026-01-07 00:54:59.329709 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329712 | orchestrator | 2026-01-07 00:54:59.329716 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-07 00:54:59.329723 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.223) 0:07:58.228 ***** 2026-01-07 00:54:59.329727 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329731 | orchestrator | 2026-01-07 00:54:59.329735 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-07 00:54:59.329738 | orchestrator | 2026-01-07 00:54:59.329742 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:54:59.329746 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.722) 0:07:58.951 ***** 2026-01-07 00:54:59.329750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.329756 | orchestrator | 2026-01-07 00:54:59.329825 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-07 00:54:59.329829 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:01.283) 0:08:00.234 ***** 2026-01-07 00:54:59.329834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:54:59.329838 | orchestrator | 2026-01-07 00:54:59.329842 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:54:59.329845 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:01.296) 0:08:01.531 ***** 2026-01-07 00:54:59.329849 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329853 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.329857 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.329861 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.329865 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.329869 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.329873 | orchestrator | 2026-01-07 00:54:59.329876 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:54:59.329880 | orchestrator | Wednesday 07 January 2026 00:52:24 +0000 (0:00:01.187) 0:08:02.719 ***** 2026-01-07 00:54:59.329884 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.329888 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.329892 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.329896 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.329899 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.329903 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.329912 | orchestrator | 2026-01-07 00:54:59.329916 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:54:59.329920 | orchestrator | Wednesday 07 
January 2026 00:52:25 +0000 (0:00:00.714) 0:08:03.433 ***** 2026-01-07 00:54:59.329924 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.329928 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.329932 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.329935 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.329939 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.329943 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.329947 | orchestrator | 2026-01-07 00:54:59.329951 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:54:59.329955 | orchestrator | Wednesday 07 January 2026 00:52:26 +0000 (0:00:01.013) 0:08:04.447 ***** 2026-01-07 00:54:59.329958 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.329962 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.329966 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.329970 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.329973 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.329977 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.329981 | orchestrator | 2026-01-07 00:54:59.329985 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:54:59.329989 | orchestrator | Wednesday 07 January 2026 00:52:26 +0000 (0:00:00.701) 0:08:05.149 ***** 2026-01-07 00:54:59.329992 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.329996 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330000 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330004 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330008 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330011 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330045 | orchestrator | 2026-01-07 00:54:59.330063 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-07 00:54:59.330069 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 (0:00:01.190) 0:08:06.339 ***** 2026-01-07 00:54:59.330075 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330082 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330093 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330099 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330105 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330111 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330117 | orchestrator | 2026-01-07 00:54:59.330123 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:54:59.330129 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:00.604) 0:08:06.943 ***** 2026-01-07 00:54:59.330136 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330140 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330144 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330147 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330151 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330155 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330159 | orchestrator | 2026-01-07 00:54:59.330163 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:54:59.330167 | orchestrator | Wednesday 07 January 2026 00:52:29 +0000 (0:00:00.776) 0:08:07.720 ***** 2026-01-07 00:54:59.330170 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330174 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330178 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330182 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330185 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330189 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330193 | orchestrator 
| 2026-01-07 00:54:59.330199 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:54:59.330205 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:01.006) 0:08:08.727 ***** 2026-01-07 00:54:59.330216 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330221 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330231 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330237 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330242 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330248 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330253 | orchestrator | 2026-01-07 00:54:59.330258 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:54:59.330264 | orchestrator | Wednesday 07 January 2026 00:52:31 +0000 (0:00:01.346) 0:08:10.073 ***** 2026-01-07 00:54:59.330270 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330276 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330282 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330288 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330293 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330299 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330306 | orchestrator | 2026-01-07 00:54:59.330311 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:54:59.330317 | orchestrator | Wednesday 07 January 2026 00:52:32 +0000 (0:00:00.626) 0:08:10.699 ***** 2026-01-07 00:54:59.330325 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330329 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330332 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330336 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330340 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:54:59.330346 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330352 | orchestrator | 2026-01-07 00:54:59.330358 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:54:59.330364 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:00.850) 0:08:11.550 ***** 2026-01-07 00:54:59.330370 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330376 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330381 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330387 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330393 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330399 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330405 | orchestrator | 2026-01-07 00:54:59.330411 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:54:59.330417 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:00.568) 0:08:12.118 ***** 2026-01-07 00:54:59.330423 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330429 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330435 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330441 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330447 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330453 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330459 | orchestrator | 2026-01-07 00:54:59.330465 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:54:59.330471 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:00.870) 0:08:12.988 ***** 2026-01-07 00:54:59.330478 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330483 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330486 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330490 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:54:59.330494 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330497 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330501 | orchestrator | 2026-01-07 00:54:59.330505 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:54:59.330509 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:00.588) 0:08:13.577 ***** 2026-01-07 00:54:59.330512 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330516 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330520 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330525 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330532 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330549 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330555 | orchestrator | 2026-01-07 00:54:59.330562 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:54:59.330568 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.805) 0:08:14.383 ***** 2026-01-07 00:54:59.330574 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330580 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.330586 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330592 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:54:59.330599 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:54:59.330604 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:54:59.330610 | orchestrator | 2026-01-07 00:54:59.330617 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:54:59.330623 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.592) 0:08:14.975 ***** 2026-01-07 00:54:59.330636 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.330640 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:54:59.330643 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.330647 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330651 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330655 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330658 | orchestrator | 2026-01-07 00:54:59.330662 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:54:59.330666 | orchestrator | Wednesday 07 January 2026 00:52:37 +0000 (0:00:00.797) 0:08:15.773 ***** 2026-01-07 00:54:59.330670 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330674 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330678 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330681 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330685 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330689 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330692 | orchestrator | 2026-01-07 00:54:59.330696 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:54:59.330700 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:00.647) 0:08:16.420 ***** 2026-01-07 00:54:59.330704 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:54:59.330708 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:54:59.330711 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:54:59.330715 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:54:59.330719 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:54:59.330723 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:54:59.330727 | orchestrator | 2026-01-07 00:54:59.330730 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-07 00:54:59.330734 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:01.343) 0:08:17.763 ***** 2026-01-07 00:54:59.330738 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Wednesday 07 January 2026 00:52:43 +0000 (0:00:03.953) 0:08:21.717 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Wednesday 07 January 2026 00:52:45 +0000 (0:00:02.189) 0:08:23.906 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Wednesday 07 January 2026 00:52:47 +0000 (0:00:01.985) 0:08:25.891 *****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Wednesday 07 January 2026 00:52:48 +0000 (0:00:00.907) 0:08:26.799 *****
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Wednesday 07 January 2026 00:52:49 +0000 (0:00:01.178) 0:08:27.978 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Wednesday 07 January 2026 00:52:51 +0000 (0:00:01.732) 0:08:29.710 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Wednesday 07 January 2026 00:52:54 +0000 (0:00:03.522) 0:08:33.233 *****
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Wednesday 07 January 2026 00:52:56 +0000 (0:00:01.274) 0:08:34.507 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Wednesday 07 January 2026 00:52:56 +0000 (0:00:00.813) 0:08:35.320 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Wednesday 07 January 2026 00:52:59 +0000 (0:00:02.128) 0:08:37.448 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 07 January 2026 00:53:00 +0000 (0:00:01.065) 0:08:38.514 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 07 January 2026 00:53:00 +0000 (0:00:00.529) 0:08:39.044 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 07 January 2026 00:53:01 +0000 (0:00:00.749) 0:08:39.794 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 07 January 2026 00:53:01 +0000 (0:00:00.316) 0:08:40.110 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 07 January 2026 00:53:02 +0000 (0:00:00.728) 0:08:40.839 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 07 January 2026 00:53:03 +0000 (0:00:00.882) 0:08:41.722 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.716) 0:08:42.439 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.275) 0:08:42.714 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.240) 0:08:42.955 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.363) 0:08:43.318 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 07 January 2026 00:53:05 +0000 (0:00:00.678) 0:08:43.996 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.699) 0:08:44.696 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.285) 0:08:44.982 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 07 January 2026 00:53:07 +0000 (0:00:00.466) 0:08:45.448 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 07 January 2026 00:53:07 +0000 (0:00:00.288) 0:08:45.737 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 07 January 2026 00:53:07 +0000 (0:00:00.287) 0:08:46.024 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 07 January 2026 00:53:07 +0000 (0:00:00.305) 0:08:46.330 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.456) 0:08:46.786 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.319) 0:08:47.106 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.259) 0:08:47.366 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.316) 0:08:47.683 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.615) 0:08:48.298 *****
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Wednesday 07 January 2026 00:53:10 +0000 (0:00:00.356) 0:08:48.655 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Wednesday 07 January 2026 00:53:12 +0000 (0:00:01.795) 0:08:50.450 *****
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Wednesday 07 January 2026 00:53:12 +0000 (0:00:00.181) 0:08:50.632 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Wednesday 07 January 2026 00:53:20 +0000 (0:00:08.137) 0:08:58.770 *****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Wednesday 07 January 2026 00:53:24 +0000 (0:00:03.697) 0:09:02.467 *****
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Wednesday 07 January 2026 00:53:24 +0000 (0:00:00.670) 0:09:03.138 *****
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Wednesday 07 January 2026 00:53:25 +0000 (0:00:01.157) 0:09:04.295 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Wednesday 07 January 2026 00:53:28 +0000 (0:00:02.504) 0:09:06.800 *****
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Wednesday 07 January 2026 00:53:29 +0000 (0:00:01.494) 0:09:08.295 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Wednesday 07 January 2026 00:53:32 +0000 (0:00:02.601) 0:09:10.896 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Wednesday 07 January 2026 00:53:33 +0000 (0:00:00.554) 0:09:11.451 *****
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-5, testbed-node-4

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Wednesday 07 January 2026 00:53:34 +0000 (0:00:01.318) 0:09:12.770 *****
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.594) 0:09:13.365 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Wednesday 07 January 2026 00:53:36 +0000 (0:00:01.199) 0:09:14.564 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Wednesday 07 January 2026 00:53:37 +0000 (0:00:01.717) 0:09:16.282 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
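The "Generate systemd unit file" tasks above template one unit per containerized daemon onto each node, and the play then enables `ceph-mds.target` and starts the instances. As an illustrative sketch only (the unit name, image, and options below are hypothetical placeholders, not ceph-ansible's actual template), such a template unit might look like:

```shell
# Sketch only: write a minimal systemd template unit of the kind ceph-ansible
# generates for containerized daemons. Image, mounts, and options are
# hypothetical, not the real template.
unit=/tmp/ceph-mds@.service
cat > "$unit" <<'EOF'
[Unit]
Description=Ceph MDS (containerized)
After=network-online.target

[Service]
# %i expands to the instance name, e.g. ceph-mds@testbed-node-3
ExecStartPre=-/usr/bin/docker rm -f ceph-mds-%i
ExecStart=/usr/bin/docker run --rm --name ceph-mds-%i \
    -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
    quay.io/ceph/daemon:latest mds
ExecStop=/usr/bin/docker stop ceph-mds-%i
Restart=always

[Install]
WantedBy=ceph-mds.target
EOF
echo "wrote $unit"
```

The `WantedBy=ceph-mds.target` line is what makes `systemctl enable ceph-mds.target` pull all per-host instances in together, matching the "Enable ceph-mds.target" and "Systemd start mds container" tasks in the log.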
TASK [ceph-mds : Systemd start mds container] **********************************
Wednesday 07 January 2026 00:53:39 +0000 (0:00:01.850) 0:09:18.132 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Wednesday 07 January 2026 00:53:41 +0000 (0:00:02.002) 0:09:20.134 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Wednesday 07 January 2026 00:53:43 +0000 (0:00:01.507) 0:09:21.641 *****
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Wednesday 07 January 2026 00:53:44 +0000 (0:00:00.783) 0:09:22.425 *****
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Wednesday 07 January 2026 00:53:44 +0000 (0:00:00.782) 0:09:23.207 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Wednesday 07 January 2026 00:53:45 +0000 (0:00:00.341) 0:09:23.549 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Wednesday 07 January 2026 00:53:46 +0000 (0:00:01.240) 0:09:24.790 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Wednesday 07 January 2026 00:53:47 +0000 (0:00:00.868) 0:09:25.658 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
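Earlier in this play, "Create filesystem pools" iterated over the `cephfs_data` and `cephfs_metadata` specs shown in the log (pg_num 16, `replicated_rule`, size 3), and "Create ceph filesystem" tied them into a CephFS volume. A rough sketch of the equivalent ceph CLI calls, delegated to the first monitor (illustrative only; the Ansible modules compose the real commands, and the commands below are printed rather than executed since this host is not a cluster member):

```shell
# Sketch: print the ceph CLI equivalents of the pool/filesystem tasks.
# Flag layout is illustrative; pool size (3) would need an extra
# "ceph osd pool set <pool> size 3" step, omitted here.
out=/tmp/cephfs_cmds.txt
: > "$out"
for pool in cephfs_data cephfs_metadata; do
  echo "ceph osd pool create $pool 16 16 replicated replicated_rule" >> "$out"
  echo "ceph osd pool application enable $pool cephfs" >> "$out"
done
echo "ceph fs new cephfs cephfs_metadata cephfs_data" >> "$out"
cat "$out"
```

`ceph fs new` takes the metadata pool first and the data pool second, which is why both pools must exist before the "Create ceph filesystem" task runs.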
PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.793) 0:09:26.452 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.521) 0:09:26.973 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.719) 0:09:27.693 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.290) 0:09:27.983 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.725) 0:09:28.709 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 07 January 2026 00:53:51 +0000 (0:00:00.720) 0:09:29.429 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 07 January 2026 00:53:52 +0000 (0:00:01.029) 0:09:30.459 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 07 January 2026 00:53:52 +0000 (0:00:00.311) 0:09:30.770 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 07 January 2026 00:53:52 +0000 (0:00:00.304) 0:09:31.075 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 07 January 2026 00:53:53 +0000 (0:00:00.333) 0:09:31.409 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 07 January 2026 00:53:54 +0000 (0:00:00.991) 0:09:32.400 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 07 January 2026 00:53:54 +0000 (0:00:00.743) 0:09:33.144 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 07 January 2026 00:53:55 +0000 (0:00:00.288) 0:09:33.433 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 07 January 2026 00:53:55 +0000 (0:00:00.296) 0:09:33.729 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 07 January 2026 00:53:55 +0000 (0:00:00.579) 0:09:34.309 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.320) 0:09:34.630 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.325) 0:09:34.955 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.303) 0:09:35.258 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 07 January 2026 00:53:57 +0000 (0:00:00.656) 0:09:35.915 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 07 January 2026 00:53:57 +0000 (0:00:00.302) 0:09:36.218 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 07 January 2026 00:53:58 +0000 (0:00:00.335) 0:09:36.553 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Wednesday 07 January 2026 00:53:58 +0000 (0:00:00.735) 0:09:37.288 *****
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Wednesday 07 January 2026 00:53:59 +0000 (0:00:00.506) 0:09:37.794 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Wednesday 07 January 2026 00:54:01 +0000 (0:00:02.161) 0:09:39.955 *****
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
2026-01-07 00:54:59.332826 | orchestrator
| changed: [testbed-node-4] => (item=None) 2026-01-07 00:54:59.332833 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-07 00:54:59.332837 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.332841 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:54:59.332845 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-07 00:54:59.332848 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.332852 | orchestrator | 2026-01-07 00:54:59.332856 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-07 00:54:59.332860 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:01.605) 0:09:41.561 ***** 2026-01-07 00:54:59.332864 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.332867 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:54:59.332874 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:54:59.332877 | orchestrator | 2026-01-07 00:54:59.332881 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-07 00:54:59.332885 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.330) 0:09:41.891 ***** 2026-01-07 00:54:59.332889 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:54:59.332893 | orchestrator | 2026-01-07 00:54:59.332897 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-07 00:54:59.332901 | orchestrator | Wednesday 07 January 2026 00:54:04 +0000 (0:00:00.558) 0:09:42.450 ***** 2026-01-07 00:54:59.332904 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.332909 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.332913 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:54:59.332917 | orchestrator | 2026-01-07 00:54:59.332920 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-07 00:54:59.332924 | orchestrator | Wednesday 07 January 2026 00:54:05 +0000 (0:00:01.293) 0:09:43.744 ***** 2026-01-07 00:54:59.332928 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.332932 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:54:59.332936 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.332940 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:54:59.332943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.332947 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:54:59.332953 | orchestrator | 2026-01-07 00:54:59.332959 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-07 00:54:59.332965 | orchestrator | Wednesday 07 January 2026 00:54:09 +0000 (0:00:04.593) 0:09:48.337 ***** 2026-01-07 00:54:59.332972 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.332979 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:54:59.332985 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.332991 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:54:59.332997 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:54:59.333004 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:54:59.333010 | orchestrator | 2026-01-07 00:54:59.333021 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:54:59.333028 | orchestrator | Wednesday 07 January 2026 00:54:12 +0000 (0:00:02.588) 0:09:50.926 ***** 2026-01-07 00:54:59.333034 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-07 00:54:59.333041 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:54:59.333047 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-07 00:54:59.333077 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:54:59.333084 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:54:59.333089 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:54:59.333093 | orchestrator | 2026-01-07 00:54:59.333097 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-07 00:54:59.333105 | orchestrator | Wednesday 07 January 2026 00:54:13 +0000 (0:00:01.286) 0:09:52.213 ***** 2026-01-07 00:54:59.333109 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-07 00:54:59.333113 | orchestrator | 2026-01-07 00:54:59.333117 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-07 00:54:59.333121 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:00.208) 0:09:52.421 ***** 2026-01-07 00:54:59.333124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-07 00:54:59.333129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333144 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:54:59.333148 | orchestrator | 2026-01-07 00:54:59.333152 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-07 00:54:59.333159 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:01.131) 0:09:53.553 ***** 2026-01-07 00:54:59.333163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:54:59.333182 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
2026-01-07 00:54:59.333186 | orchestrator |
2026-01-07 00:54:59.333190 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-01-07 00:54:59.333193 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:00.607) 0:09:54.160 *****
2026-01-07 00:54:59.333197 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-07 00:54:59.333201 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-07 00:54:59.333205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-07 00:54:59.333214 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-07 00:54:59.333218 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-07 00:54:59.333221 | orchestrator |
2026-01-07 00:54:59.333225 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-01-07 00:54:59.333229 | orchestrator | Wednesday 07 January 2026 00:54:45 +0000 (0:00:29.288) 0:10:23.448 *****
2026-01-07 00:54:59.333233 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.333236 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.333240 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.333244 | orchestrator |
2026-01-07 00:54:59.333248 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-01-07 00:54:59.333251 | orchestrator | Wednesday 07 January 2026 00:54:45 +0000 (0:00:00.323) 0:10:23.771 *****
2026-01-07 00:54:59.333255 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.333259 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.333263 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.333266 | orchestrator |
2026-01-07 00:54:59.333270 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-01-07 00:54:59.333274 | orchestrator | Wednesday 07 January 2026 00:54:45 +0000 (0:00:00.315) 0:10:24.087 *****
2026-01-07 00:54:59.333278 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.333281 | orchestrator |
2026-01-07 00:54:59.333285 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-01-07 00:54:59.333289 | orchestrator | Wednesday 07 January 2026 00:54:46 +0000 (0:00:00.706) 0:10:24.794 *****
2026-01-07 00:54:59.333293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.333296 | orchestrator |
2026-01-07 00:54:59.333303 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-01-07 00:54:59.333307 | orchestrator | Wednesday 07 January 2026 00:54:46 +0000 (0:00:00.514) 0:10:25.309 *****
2026-01-07 00:54:59.333310 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.333314 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.333318 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.333322 | orchestrator |
2026-01-07 00:54:59.333325 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-01-07 00:54:59.333329 | orchestrator | Wednesday 07 January 2026 00:54:48 +0000 (0:00:01.150) 0:10:26.459 *****
2026-01-07 00:54:59.333333 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.333337 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.333340 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.333344 | orchestrator |
2026-01-07 00:54:59.333348 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-01-07 00:54:59.333352 | orchestrator | Wednesday 07 January 2026 00:54:49 +0000 (0:00:01.340) 0:10:27.800 *****
2026-01-07 00:54:59.333355 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:54:59.333359 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:54:59.333363 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:54:59.333367 | orchestrator |
2026-01-07 00:54:59.333370 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-01-07 00:54:59.333374 | orchestrator | Wednesday 07 January 2026 00:54:51 +0000 (0:00:01.864) 0:10:29.665 *****
2026-01-07 00:54:59.333378 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-07 00:54:59.333385 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-07 00:54:59.333392 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-07 00:54:59.333395 | orchestrator |
2026-01-07 00:54:59.333399 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:54:59.333403 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:02.788) 0:10:32.453 *****
2026-01-07 00:54:59.333407 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.333411 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.333414 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.333418 | orchestrator |
2026-01-07 00:54:59.333422 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-07 00:54:59.333426 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:00.361) 0:10:32.815 *****
2026-01-07 00:54:59.333429 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:54:59.333433 | orchestrator |
2026-01-07 00:54:59.333437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-07 00:54:59.333441 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:00.494) 0:10:33.309 *****
2026-01-07 00:54:59.333445 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.333449 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.333452 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.333456 | orchestrator |
2026-01-07 00:54:59.333460 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-07 00:54:59.333464 | orchestrator | Wednesday 07 January 2026 00:54:55 +0000 (0:00:00.554) 0:10:33.863 *****
2026-01-07 00:54:59.333468 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.333471 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:54:59.333475 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:54:59.333479 | orchestrator |
2026-01-07 00:54:59.333483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-07 00:54:59.333486 | orchestrator | Wednesday 07 January 2026 00:54:55 +0000 (0:00:00.347) 0:10:34.211 *****
2026-01-07 00:54:59.333490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:54:59.333494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:54:59.333498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:54:59.333501 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:54:59.333505 | orchestrator |
2026-01-07 00:54:59.333509 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-07 00:54:59.333513 | orchestrator | Wednesday 07 January 2026 00:54:56 +0000 (0:00:00.663) 0:10:34.875 *****
2026-01-07 00:54:59.333517 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:54:59.333520 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:54:59.333524 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:54:59.333528 | orchestrator |
2026-01-07 00:54:59.333532 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:54:59.333536 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-01-07 00:54:59.333541 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-01-07 00:54:59.333545 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-01-07 00:54:59.333549 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-01-07 00:54:59.333553 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-01-07 00:54:59.333559 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-01-07 00:54:59.333566 | orchestrator |
2026-01-07 00:54:59.333569 | orchestrator |
2026-01-07 00:54:59.333573 | orchestrator |
2026-01-07 00:54:59.333577 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:54:59.333581 | orchestrator | Wednesday 07 January 2026 00:54:56 +0000 (0:00:00.238) 0:10:35.113 *****
2026-01-07 00:54:59.333585 | orchestrator | ===============================================================================
2026-01-07 00:54:59.333588 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 58.18s
2026-01-07 00:54:59.333592 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.24s
2026-01-07 00:54:59.333596 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.29s
2026-01-07 00:54:59.333600 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.29s
2026-01-07 00:54:59.333604 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s
2026-01-07 00:54:59.333607 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.75s
2026-01-07 00:54:59.333611 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s
2026-01-07 00:54:59.333615 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.60s
2026-01-07 00:54:59.333619 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.89s
2026-01-07 00:54:59.333622 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.14s
2026-01-07 00:54:59.333629 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.75s
2026-01-07 00:54:59.333633 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.29s
2026-01-07 00:54:59.333636 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s
2026-01-07 00:54:59.333640 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.59s
2026-01-07 00:54:59.333644 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.95s
2026-01-07 00:54:59.333648 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.70s
2026-01-07 00:54:59.333651 | orchestrator | ceph-mgr : Get keys from monitors --------------------------------------- 3.57s
2026-01-07 00:54:59.333655 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.52s
2026-01-07 00:54:59.333659 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.31s
2026-01-07 00:54:59.333663 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.14s
2026-01-07 00:54:59.333667 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:54:59.333671 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:54:59.333674 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:54:59.333678 | orchestrator | 2026-01-07 00:54:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:02.361178 | orchestrator | 2026-01-07 00:55:02 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:02.362915 | orchestrator | 2026-01-07 00:55:02 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:02.364710 | orchestrator | 2026-01-07 00:55:02 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:02.364773 | orchestrator | 2026-01-07 00:55:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:05.403962 | orchestrator | 2026-01-07 00:55:05 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:05.405874 | orchestrator | 2026-01-07 00:55:05 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:05.408851 | orchestrator | 2026-01-07 00:55:05 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:05.409015 | orchestrator |
2026-01-07 00:55:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:08.451294 | orchestrator | 2026-01-07 00:55:08 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:08.451901 | orchestrator | 2026-01-07 00:55:08 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:08.453454 | orchestrator | 2026-01-07 00:55:08 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:08.453582 | orchestrator | 2026-01-07 00:55:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:11.490161 | orchestrator | 2026-01-07 00:55:11 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:11.491366 | orchestrator | 2026-01-07 00:55:11 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:11.492601 | orchestrator | 2026-01-07 00:55:11 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:11.492831 | orchestrator | 2026-01-07 00:55:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:14.528160 | orchestrator | 2026-01-07 00:55:14 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:14.528511 | orchestrator | 2026-01-07 00:55:14 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:14.530440 | orchestrator | 2026-01-07 00:55:14 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:14.530512 | orchestrator | 2026-01-07 00:55:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:17.567467 | orchestrator | 2026-01-07 00:55:17 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:17.568802 | orchestrator | 2026-01-07 00:55:17 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:17.570791 | orchestrator | 2026-01-07 00:55:17 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:17.570842 | orchestrator | 2026-01-07 00:55:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:20.607687 | orchestrator | 2026-01-07 00:55:20 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:20.607783 | orchestrator | 2026-01-07 00:55:20 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:20.608927 | orchestrator | 2026-01-07 00:55:20 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:20.608979 | orchestrator | 2026-01-07 00:55:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:23.653971 | orchestrator | 2026-01-07 00:55:23 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:23.655065 | orchestrator | 2026-01-07 00:55:23 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:23.655618 | orchestrator | 2026-01-07 00:55:23 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:23.655642 | orchestrator | 2026-01-07 00:55:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:26.703283 | orchestrator | 2026-01-07 00:55:26 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:26.705670 | orchestrator | 2026-01-07 00:55:26 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:26.707913 | orchestrator | 2026-01-07 00:55:26 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:26.708158 | orchestrator | 2026-01-07 00:55:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:29.757650 | orchestrator | 2026-01-07 00:55:29 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:29.760264 | orchestrator | 2026-01-07 00:55:29 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:29.761822 | orchestrator | 2026-01-07 00:55:29 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:29.761886 | orchestrator | 2026-01-07 00:55:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:32.812805 | orchestrator | 2026-01-07 00:55:32 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:32.813463 | orchestrator | 2026-01-07 00:55:32 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:32.815180 | orchestrator | 2026-01-07 00:55:32 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:32.815227 | orchestrator | 2026-01-07 00:55:32 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:35.862337 | orchestrator | 2026-01-07 00:55:35 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:35.864856 | orchestrator | 2026-01-07 00:55:35 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:35.867641 | orchestrator | 2026-01-07 00:55:35 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:35.867703 | orchestrator | 2026-01-07 00:55:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:38.921621 | orchestrator | 2026-01-07 00:55:38 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:38.923049 | orchestrator | 2026-01-07 00:55:38 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:38.924653 | orchestrator | 2026-01-07 00:55:38 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state STARTED
2026-01-07 00:55:38.924692 | orchestrator | 2026-01-07 00:55:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:55:41.977627 | orchestrator | 2026-01-07 00:55:41 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED
2026-01-07 00:55:41.979184 | orchestrator | 2026-01-07 00:55:41 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED
2026-01-07 00:55:41.984346 | orchestrator |
2026-01-07 00:55:41.984431 | orchestrator | 2026-01-07 00:55:41 | INFO  | Task 4d7662f3-0c73-43ae-a812-b33e991c72cf is in state SUCCESS
2026-01-07 00:55:41.985720 | orchestrator |
2026-01-07 00:55:41.985760 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:55:41.985766 | orchestrator |
2026-01-07 00:55:41.985772 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:55:41.985778 | orchestrator | Wednesday 07 January 2026 00:53:02 +0000 (0:00:00.216) 0:00:00.216 *****
2026-01-07 00:55:41.985783 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:55:41.985789 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:55:41.985794 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:55:41.985798 | orchestrator |
2026-01-07 00:55:41.985803 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:55:41.985807 | orchestrator | Wednesday 07 January 2026 00:53:02 +0000 (0:00:00.233) 0:00:00.450 *****
2026-01-07 00:55:41.985813 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-07 00:55:41.985818 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-07 00:55:41.985835 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-07 00:55:41.985856 | orchestrator |
2026-01-07 00:55:41.985862 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-07 00:55:41.985869 | orchestrator |
2026-01-07 00:55:41.985875 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-07 00:55:41.985881 | orchestrator | Wednesday 07 January 2026 00:53:02 +0000 (0:00:00.390) 0:00:00.840 *****
2026-01-07 00:55:41.985888 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:55:41.985894 | orchestrator |
2026-01-07 00:55:41.985900 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-07 00:55:41.985906 | orchestrator | Wednesday 07 January 2026 00:53:03 +0000 (0:00:00.447) 0:00:01.288 *****
2026-01-07 00:55:41.985911 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:55:41.985917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:55:41.985923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:55:41.985929 | orchestrator |
2026-01-07 00:55:41.985957 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-07 00:55:41.985963 | orchestrator | Wednesday 07 January 2026 00:53:03 +0000 (0:00:00.707) 0:00:01.995 *****
2026-01-07 00:55:41.985972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:55:41.985982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:55:41.986002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 00:55:41.986260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True,
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986284 | orchestrator | 2026-01-07 00:55:41.986300 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:55:41.986304 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:01.744) 0:00:03.740 ***** 2026-01-07 00:55:41.986308 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:55:41.986318 | orchestrator | 2026-01-07 00:55:41.986388 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA 
certificates] ***** 2026-01-07 00:55:41.986399 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.443) 0:00:04.183 ***** 2026-01-07 00:55:41.986408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986418 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986448 | orchestrator | 2026-01-07 00:55:41.986452 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-07 00:55:41.986456 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:02.635) 0:00:06.819 ***** 2026-01-07 00:55:41.986460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986521 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.986530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986538 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:55:41.986542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986560 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:55:41.986564 | orchestrator | 2026-01-07 00:55:41.986568 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-07 00:55:41.986572 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:01.141) 0:00:07.960 ***** 2026-01-07 00:55:41.986580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986596 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.986604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 
00:55:41.986609 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:55:41.986615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986624 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:55:41.986628 | orchestrator | 2026-01-07 00:55:41.986631 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-07 00:55:41.986635 | orchestrator | Wednesday 07 January 2026 00:53:10 +0000 (0:00:01.042) 0:00:09.003 ***** 2026-01-07 00:55:41.986640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986706 | orchestrator | 2026-01-07 00:55:41.986710 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-07 00:55:41.986714 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:02.164) 0:00:11.168 ***** 2026-01-07 00:55:41.986718 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.986722 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:55:41.986726 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:55:41.986730 | orchestrator | 2026-01-07 00:55:41.986733 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-07 00:55:41.986737 | orchestrator | Wednesday 07 January 2026 00:53:15 +0000 (0:00:02.164) 0:00:13.332 ***** 2026-01-07 00:55:41.986741 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:55:41.986745 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:55:41.986749 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.986753 | orchestrator | 2026-01-07 00:55:41.986756 | orchestrator | TASK 
[service-check-containers : opensearch | Check containers] **************** 2026-01-07 00:55:41.986760 | orchestrator | Wednesday 07 January 2026 00:53:17 +0000 (0:00:02.203) 0:00:15.536 ***** 2026-01-07 00:55:41.986764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-07 00:55:41.986777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 00:55:41.986787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-07 00:55:41.986803 | orchestrator | 2026-01-07 00:55:41.986807 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-07 00:55:41.986811 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:02.157) 0:00:17.694 ***** 2026-01-07 00:55:41.986815 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:55:41.986819 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:55:41.986822 | orchestrator | } 2026-01-07 00:55:41.986826 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:55:41.986830 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:55:41.986834 | orchestrator | } 2026-01-07 00:55:41.986838 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:55:41.986842 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:55:41.986845 | orchestrator | } 2026-01-07 00:55:41.986849 | orchestrator | 2026-01-07 00:55:41.986853 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:55:41.986860 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:00.317) 0:00:18.012 ***** 2026-01-07 00:55:41.986868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986881 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.986885 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986898 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:55:41.986905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 00:55:41.986909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-07 00:55:41.986917 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:55:41.986921 | orchestrator | 2026-01-07 00:55:41.986926 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:55:41.986930 | orchestrator | Wednesday 07 January 2026 00:53:21 +0000 (0:00:01.761) 0:00:19.773 ***** 2026-01-07 00:55:41.986991 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.986998 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:55:41.987004 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:55:41.987011 | orchestrator | 2026-01-07 00:55:41.987018 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:55:41.987023 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.377) 0:00:20.150 ***** 2026-01-07 00:55:41.987027 | orchestrator | 2026-01-07 00:55:41.987031 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:55:41.987036 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.071) 0:00:20.222 ***** 2026-01-07 00:55:41.987040 | orchestrator | 2026-01-07 00:55:41.987045 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:55:41.987049 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.065) 0:00:20.288 ***** 2026-01-07 00:55:41.987053 | orchestrator | 2026-01-07 00:55:41.987058 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-07 00:55:41.987062 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.065) 0:00:20.353 ***** 2026-01-07 00:55:41.987067 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.987071 | orchestrator | 2026-01-07 00:55:41.987076 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-07 00:55:41.987080 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.239) 0:00:20.593 ***** 2026-01-07 00:55:41.987084 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:55:41.987089 | orchestrator | 2026-01-07 00:55:41.987093 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-07 00:55:41.987097 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:00.211) 0:00:20.804 ***** 2026-01-07 00:55:41.987102 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.987106 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:55:41.987111 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:55:41.987115 | orchestrator | 2026-01-07 00:55:41.987119 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-07 00:55:41.987124 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:52.159) 0:01:12.964 ***** 2026-01-07 00:55:41.987129 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.987133 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:55:41.987138 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:55:41.987143 | orchestrator | 2026-01-07 00:55:41.987148 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:55:41.987152 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:01:11.424) 0:02:24.389 ***** 2026-01-07 00:55:41.987160 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:55:41.987164 | orchestrator | 2026-01-07 00:55:41.987168 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 
2026-01-07 00:55:41.987172 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:00.509) 0:02:24.898 ***** 2026-01-07 00:55:41.987181 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:55:41.987185 | orchestrator | 2026-01-07 00:55:41.987189 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-07 00:55:41.987193 | orchestrator | Wednesday 07 January 2026 00:55:29 +0000 (0:00:02.637) 0:02:27.536 ***** 2026-01-07 00:55:41.987196 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:55:41.987200 | orchestrator | 2026-01-07 00:55:41.987204 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-07 00:55:41.987211 | orchestrator | Wednesday 07 January 2026 00:55:32 +0000 (0:00:02.570) 0:02:30.106 ***** 2026-01-07 00:55:41.987215 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.987219 | orchestrator | 2026-01-07 00:55:41.987223 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-07 00:55:41.987227 | orchestrator | Wednesday 07 January 2026 00:55:35 +0000 (0:00:03.527) 0:02:33.634 ***** 2026-01-07 00:55:41.987230 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:55:41.987234 | orchestrator | 2026-01-07 00:55:41.987238 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:55:41.987243 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 00:55:41.987249 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-07 00:55:41.987253 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-07 00:55:41.987256 | orchestrator | 2026-01-07 00:55:41.987260 | orchestrator | 2026-01-07 00:55:41.987264 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-07 00:55:41.987268 | orchestrator | Wednesday 07 January 2026 00:55:38 +0000 (0:00:03.132) 0:02:36.766 ***** 2026-01-07 00:55:41.987272 | orchestrator | =============================================================================== 2026-01-07 00:55:41.987276 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.42s 2026-01-07 00:55:41.987280 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.16s 2026-01-07 00:55:41.987284 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.53s 2026-01-07 00:55:41.987287 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.13s 2026-01-07 00:55:41.987291 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.64s 2026-01-07 00:55:41.987295 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.64s 2026-01-07 00:55:41.987299 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.57s 2026-01-07 00:55:41.987303 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.20s 2026-01-07 00:55:41.987306 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.16s 2026-01-07 00:55:41.987310 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.16s 2026-01-07 00:55:41.987314 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.16s 2026-01-07 00:55:41.987318 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.76s 2026-01-07 00:55:41.987322 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2026-01-07 00:55:41.987326 | orchestrator | service-cert-copy : opensearch | 
Copying over backend internal TLS certificate --- 1.14s 2026-01-07 00:55:41.987329 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.04s 2026-01-07 00:55:41.987333 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s 2026-01-07 00:55:41.987337 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-07 00:55:41.987341 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2026-01-07 00:55:41.987345 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2026-01-07 00:55:41.987352 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-01-07 00:55:41.987356 | orchestrator | 2026-01-07 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:45.052417 | orchestrator | 2026-01-07 00:55:45 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:55:45.053967 | orchestrator | 2026-01-07 00:55:45 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:55:45.054052 | orchestrator | 2026-01-07 00:55:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:48.111306 | orchestrator | 2026-01-07 00:55:48 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:55:48.112849 | orchestrator | 2026-01-07 00:55:48 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:55:48.112968 | orchestrator | 2026-01-07 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:51.149409 | orchestrator | 2026-01-07 00:55:51 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:55:51.150291 | orchestrator | 2026-01-07 00:55:51 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:55:51.150347 | orchestrator | 
2026-01-07 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:54.188662 | orchestrator | 2026-01-07 00:55:54 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:55:54.189554 | orchestrator | 2026-01-07 00:55:54 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:55:54.189598 | orchestrator | 2026-01-07 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:57.233058 | orchestrator | 2026-01-07 00:55:57 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:55:57.234390 | orchestrator | 2026-01-07 00:55:57 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:55:57.234453 | orchestrator | 2026-01-07 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:00.276765 | orchestrator | 2026-01-07 00:56:00 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:00.278766 | orchestrator | 2026-01-07 00:56:00 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:00.279042 | orchestrator | 2026-01-07 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:03.321505 | orchestrator | 2026-01-07 00:56:03 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:03.323328 | orchestrator | 2026-01-07 00:56:03 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:03.323404 | orchestrator | 2026-01-07 00:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:06.365652 | orchestrator | 2026-01-07 00:56:06 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:06.366916 | orchestrator | 2026-01-07 00:56:06 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:06.367105 | orchestrator | 2026-01-07 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 
00:56:09.411920 | orchestrator | 2026-01-07 00:56:09 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:09.414069 | orchestrator | 2026-01-07 00:56:09 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:09.414139 | orchestrator | 2026-01-07 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:12.449092 | orchestrator | 2026-01-07 00:56:12 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:12.452238 | orchestrator | 2026-01-07 00:56:12 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:12.452691 | orchestrator | 2026-01-07 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:15.489033 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:15.490504 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:15.491385 | orchestrator | 2026-01-07 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:18.539911 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:18.541690 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state STARTED 2026-01-07 00:56:18.541744 | orchestrator | 2026-01-07 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:21.585668 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:56:21.587650 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:21.590948 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:56:21.596655 | orchestrator | 2026-01-07 00:56:21 | 
INFO  | Task 51575684-b1e8-43d3-8532-2932b79a81c5 is in state SUCCESS 2026-01-07 00:56:21.598806 | orchestrator | 2026-01-07 00:56:21.598888 | orchestrator | 2026-01-07 00:56:21.598902 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-07 00:56:21.598914 | orchestrator | 2026-01-07 00:56:21.598925 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 00:56:21.598937 | orchestrator | Wednesday 07 January 2026 00:53:01 +0000 (0:00:00.087) 0:00:00.087 ***** 2026-01-07 00:56:21.598948 | orchestrator | ok: [localhost] => { 2026-01-07 00:56:21.598962 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-07 00:56:21.598972 | orchestrator | } 2026-01-07 00:56:21.598984 | orchestrator | 2026-01-07 00:56:21.598995 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-07 00:56:21.599006 | orchestrator | Wednesday 07 January 2026 00:53:02 +0000 (0:00:00.044) 0:00:00.131 ***** 2026-01-07 00:56:21.599017 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-07 00:56:21.599030 | orchestrator | ...ignoring 2026-01-07 00:56:21.599041 | orchestrator | 2026-01-07 00:56:21.599051 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-07 00:56:21.599079 | orchestrator | Wednesday 07 January 2026 00:53:04 +0000 (0:00:02.750) 0:00:02.882 ***** 2026-01-07 00:56:21.599091 | orchestrator | skipping: [localhost] 2026-01-07 00:56:21.599102 | orchestrator | 2026-01-07 00:56:21.599112 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-07 00:56:21.599123 | orchestrator | Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.047) 0:00:02.930 ***** 2026-01-07 00:56:21.599306 | orchestrator | ok: [localhost] 2026-01-07 00:56:21.599727 | orchestrator | 2026-01-07 00:56:21.599742 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:56:21.599752 | orchestrator | 2026-01-07 00:56:21.599764 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:56:21.599774 | orchestrator | Wednesday 07 January 2026 00:53:04 +0000 (0:00:00.131) 0:00:03.061 ***** 2026-01-07 00:56:21.599785 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.599822 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.599892 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.599903 | orchestrator | 2026-01-07 00:56:21.599914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:56:21.599924 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:00.276) 0:00:03.337 ***** 2026-01-07 00:56:21.599934 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-07 00:56:21.599946 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-07 00:56:21.599955 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-07 00:56:21.599965 | orchestrator | 2026-01-07 00:56:21.599974 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-07 00:56:21.599983 | orchestrator | 2026-01-07 00:56:21.599992 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-07 00:56:21.600003 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:00.461) 0:00:03.799 ***** 2026-01-07 00:56:21.600041 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:56:21.600052 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 00:56:21.600062 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 00:56:21.600073 | orchestrator | 2026-01-07 00:56:21.600083 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:56:21.600093 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.321) 0:00:04.121 ***** 2026-01-07 00:56:21.600104 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:21.600115 | orchestrator | 2026-01-07 00:56:21.600126 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-07 00:56:21.600137 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:00.536) 0:00:04.657 ***** 2026-01-07 00:56:21.600212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600276 | orchestrator | 2026-01-07 00:56:21.600314 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-07 00:56:21.600326 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:02.683) 0:00:07.341 ***** 2026-01-07 00:56:21.600336 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.600346 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.600352 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600358 | orchestrator | 2026-01-07 00:56:21.600364 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-07 00:56:21.600370 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.630) 0:00:07.971 ***** 2026-01-07 00:56:21.600383 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
00:56:21.600389 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600395 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.600401 | orchestrator | 2026-01-07 00:56:21.600407 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-07 00:56:21.600413 | orchestrator | Wednesday 07 January 2026 00:53:11 +0000 (0:00:01.188) 0:00:09.160 ***** 2026-01-07 00:56:21.600424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 
00:56:21.600465 | orchestrator | 2026-01-07 00:56:21.600476 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-07 00:56:21.600487 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:02.788) 0:00:11.949 ***** 2026-01-07 00:56:21.600498 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.600507 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600517 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.600527 | orchestrator | 2026-01-07 00:56:21.600538 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-07 00:56:21.600549 | orchestrator | Wednesday 07 January 2026 00:53:15 +0000 (0:00:01.277) 0:00:13.226 ***** 2026-01-07 00:56:21.600558 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.600572 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:21.600583 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:21.600593 | orchestrator | 2026-01-07 00:56:21.600603 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:56:21.600614 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:04.149) 0:00:17.376 ***** 2026-01-07 00:56:21.600624 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:21.600635 | orchestrator | 2026-01-07 00:56:21.600645 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-07 00:56:21.600657 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:00.502) 0:00:17.879 ***** 2026-01-07 00:56:21.600680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600700 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600721 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.600734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600747 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.600755 | orchestrator | 2026-01-07 00:56:21.600762 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-07 00:56:21.600770 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:02.597) 0:00:20.477 ***** 2026-01-07 00:56:21.600781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600788 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.600798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600809 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.600819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600851 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600859 | orchestrator | 2026-01-07 00:56:21.600865 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-07 00:56:21.600871 | orchestrator | Wednesday 07 January 2026 00:53:26 +0000 (0:00:03.637) 0:00:24.114 ***** 2026-01-07 00:56:21.600878 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600893 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.600909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600916 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.600923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.600934 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.600940 | orchestrator | 2026-01-07 00:56:21.600946 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-07 00:56:21.600953 | orchestrator | Wednesday 07 January 2026 00:53:28 +0000 
(0:00:02.347) 0:00:26.462 ***** 2026-01-07 00:56:21.600968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.600996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:56:21.601004 | orchestrator | 2026-01-07 00:56:21.601010 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-07 00:56:21.601016 | orchestrator | Wednesday 07 January 2026 00:53:31 +0000 (0:00:03.157) 0:00:29.619 ***** 2026-01-07 00:56:21.601023 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:56:21.601029 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:56:21.601035 | orchestrator | } 2026-01-07 00:56:21.601042 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:56:21.601048 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:56:21.601054 | orchestrator | } 2026-01-07 00:56:21.601061 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:56:21.601067 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:56:21.601073 | orchestrator | } 2026-01-07 00:56:21.601079 | orchestrator | 2026-01-07 00:56:21.601085 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:56:21.601092 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:00.631) 0:00:30.250 ***** 2026-01-07 00:56:21.601099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.601115 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.601137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.601150 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.601162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-07 00:56:21.601180 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601187 | orchestrator |
2026-01-07 00:56:21.601193 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-01-07 00:56:21.601199 | orchestrator | Wednesday 07 January 2026 00:53:34 +0000 (0:00:02.838) 0:00:33.089 *****
2026-01-07 00:56:21.601206 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601212 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601219 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601230 | orchestrator |
2026-01-07 00:56:21.601240 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-01-07 00:56:21.601250 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.534) 0:00:33.623 *****
2026-01-07 00:56:21.601260 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601271 | orchestrator |
2026-01-07 00:56:21.601281 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-01-07 00:56:21.601292 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:00.183) 0:00:33.807 *****
2026-01-07 00:56:21.601303 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601312 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601322 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601333 | orchestrator |
2026-01-07 00:56:21.601343 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-01-07 00:56:21.601354 | orchestrator | Wednesday 07 January 2026 00:53:36 +0000 (0:00:00.597) 0:00:34.404 *****
2026-01-07 00:56:21.601370 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601380 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601391 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601402 | orchestrator |
2026-01-07 00:56:21.601413 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-01-07 00:56:21.601423 | orchestrator | Wednesday 07 January 2026 00:53:36 +0000 (0:00:00.364) 0:00:34.769 *****
2026-01-07 00:56:21.601433 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601444 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601453 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601463 | orchestrator |
2026-01-07 00:56:21.601474 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-01-07 00:56:21.601484 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.350) 0:00:35.119 *****
2026-01-07 00:56:21.601494 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601504 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601515 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601525 | orchestrator |
2026-01-07 00:56:21.601536 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-01-07 00:56:21.601547 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.358) 0:00:35.478 *****
2026-01-07 00:56:21.601557 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601567 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601582 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601592 | orchestrator |
2026-01-07 00:56:21.601602 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-01-07 00:56:21.601621 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.736) 0:00:36.215 *****
2026-01-07 00:56:21.601631 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601641 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601652 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601663 | orchestrator |
2026-01-07 00:56:21.601673 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-01-07 00:56:21.601683 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.468) 0:00:36.683 *****
2026-01-07 00:56:21.601693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:56:21.601703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:56:21.601714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:56:21.601724 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601734 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-07 00:56:21.601745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-07 00:56:21.601756 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-07 00:56:21.601766 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601776 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-07 00:56:21.601786 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-07 00:56:21.601796 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-07 00:56:21.601806 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601815 | orchestrator |
2026-01-07 00:56:21.601884 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-01-07 00:56:21.601898 | orchestrator | Wednesday 07 January 2026 00:53:39 +0000 (0:00:00.449) 0:00:37.133 *****
2026-01-07 00:56:21.601907 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601918 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601928 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.601939 | orchestrator |
2026-01-07 00:56:21.601950 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-01-07 00:56:21.601960 | orchestrator | Wednesday 07 January 2026 00:53:39 +0000 (0:00:00.312) 0:00:37.445 *****
2026-01-07 00:56:21.601970 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.601980 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.601991 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602001 | orchestrator |
2026-01-07 00:56:21.602011 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-01-07 00:56:21.602107 | orchestrator | Wednesday 07 January 2026 00:53:39 +0000 (0:00:00.631) 0:00:38.076 *****
2026-01-07 00:56:21.602118 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602128 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602139 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602150 | orchestrator |
2026-01-07 00:56:21.602160 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-01-07 00:56:21.602170 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.522) 0:00:38.599 *****
2026-01-07 00:56:21.602180 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602190 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602201 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602211 | orchestrator |
2026-01-07 00:56:21.602222 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-01-07 00:56:21.602233 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.413) 0:00:39.013 *****
2026-01-07 00:56:21.602243 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602253 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602263 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602274 | orchestrator |
2026-01-07 00:56:21.602284 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-01-07 00:56:21.602294 | orchestrator | Wednesday 07 January 2026 00:53:41 +0000 (0:00:00.334) 0:00:39.347 *****
2026-01-07 00:56:21.602313 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602322 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602332 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602343 | orchestrator |
2026-01-07 00:56:21.602352 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-01-07 00:56:21.602362 | orchestrator | Wednesday 07 January 2026 00:53:41 +0000 (0:00:00.486) 0:00:39.834 *****
2026-01-07 00:56:21.602371 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602382 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602392 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602403 | orchestrator |
2026-01-07 00:56:21.602414 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-01-07 00:56:21.602434 | orchestrator | Wednesday 07 January 2026 00:53:42 +0000 (0:00:00.308) 0:00:40.142 *****
2026-01-07 00:56:21.602445 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:21.602455 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:21.602466 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:21.602477 | orchestrator |
2026-01-07 00:56:21.602488 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-01-07 00:56:21.602499 | orchestrator |
Wednesday 07 January 2026 00:53:42 +0000 (0:00:00.316) 0:00:40.459 ***** 2026-01-07 00:56:21.602519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602532 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:56:21.602544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602562 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602584 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602599 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.602610 | orchestrator | 2026-01-07 00:56:21.602621 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-01-07 00:56:21.602631 | orchestrator | Wednesday 07 January 2026 00:53:44 +0000 (0:00:02.212) 0:00:42.671 ***** 2026-01-07 00:56:21.602641 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.602650 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602659 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.602667 | orchestrator | 2026-01-07 00:56:21.602676 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-07 00:56:21.602685 | orchestrator | Wednesday 07 January 2026 00:53:44 +0000 (0:00:00.318) 0:00:42.990 ***** 2026-01-07 00:56:21.602694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602723 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.602741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602754 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:21.602782 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.602792 | orchestrator | 2026-01-07 00:56:21.602800 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-07 00:56:21.602810 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:02.319) 0:00:45.309 ***** 2026-01-07 00:56:21.602821 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.602857 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602868 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.602877 | orchestrator | 2026-01-07 00:56:21.602887 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-07 00:56:21.602902 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:00.324) 0:00:45.634 ***** 2026-01-07 00:56:21.602912 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.602922 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602932 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.602942 | orchestrator | 2026-01-07 00:56:21.602952 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-07 00:56:21.602963 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:00.315) 0:00:45.949 ***** 2026-01-07 00:56:21.602973 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.602983 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.602993 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603003 | orchestrator | 2026-01-07 
00:56:21.603013 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-07 00:56:21.603023 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.328) 0:00:46.277 ***** 2026-01-07 00:56:21.603033 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603045 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603056 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603067 | orchestrator | 2026-01-07 00:56:21.603077 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-07 00:56:21.603087 | orchestrator | Wednesday 07 January 2026 00:53:48 +0000 (0:00:00.680) 0:00:46.957 ***** 2026-01-07 00:56:21.603102 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603112 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603122 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603131 | orchestrator | 2026-01-07 00:56:21.603142 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-07 00:56:21.603152 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:00.332) 0:00:47.290 ***** 2026-01-07 00:56:21.603162 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.603172 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:21.603184 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:21.603190 | orchestrator | 2026-01-07 00:56:21.603196 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-07 00:56:21.603209 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.896) 0:00:48.187 ***** 2026-01-07 00:56:21.603215 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603222 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.603228 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.603234 | orchestrator | 2026-01-07 00:56:21.603240 | 
orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-07 00:56:21.603246 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.506) 0:00:48.693 ***** 2026-01-07 00:56:21.603252 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603258 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.603264 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.603270 | orchestrator | 2026-01-07 00:56:21.603276 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-07 00:56:21.603282 | orchestrator | Wednesday 07 January 2026 00:53:50 +0000 (0:00:00.337) 0:00:49.031 ***** 2026-01-07 00:56:21.603290 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-07 00:56:21.603298 | orchestrator | ...ignoring 2026-01-07 00:56:21.603304 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-07 00:56:21.603310 | orchestrator | ...ignoring 2026-01-07 00:56:21.603316 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-07 00:56:21.603322 | orchestrator | ...ignoring 2026-01-07 00:56:21.603328 | orchestrator | 2026-01-07 00:56:21.603335 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-07 00:56:21.603341 | orchestrator | Wednesday 07 January 2026 00:54:01 +0000 (0:00:10.794) 0:00:59.825 ***** 2026-01-07 00:56:21.603347 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603353 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.603359 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.603365 | orchestrator | 2026-01-07 00:56:21.603371 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-07 00:56:21.603378 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:00.319) 0:01:00.144 ***** 2026-01-07 00:56:21.603384 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603390 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603396 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603402 | orchestrator | 2026-01-07 00:56:21.603408 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-07 00:56:21.603414 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:00.488) 0:01:00.633 ***** 2026-01-07 00:56:21.603420 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603426 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603432 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603438 | orchestrator | 2026-01-07 00:56:21.603445 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-07 00:56:21.603451 | orchestrator | Wednesday 07 January 2026 00:54:02 +0000 (0:00:00.338) 0:01:00.971 ***** 2026-01-07 00:56:21.603457 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:56:21.603463 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603469 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603475 | orchestrator | 2026-01-07 00:56:21.603481 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-07 00:56:21.603488 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.343) 0:01:01.314 ***** 2026-01-07 00:56:21.603494 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603509 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.603516 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.603522 | orchestrator | 2026-01-07 00:56:21.603528 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-07 00:56:21.603541 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.324) 0:01:01.639 ***** 2026-01-07 00:56:21.603547 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603559 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603566 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603572 | orchestrator | 2026-01-07 00:56:21.603579 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:56:21.603585 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:00.462) 0:01:02.102 ***** 2026-01-07 00:56:21.603591 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603597 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603603 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-07 00:56:21.603610 | orchestrator | 2026-01-07 00:56:21.603616 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-07 00:56:21.603622 | orchestrator | Wednesday 07 January 2026 00:54:04 +0000 (0:00:00.391) 0:01:02.493 ***** 2026-01-07 
00:56:21.603629 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.603635 | orchestrator | 2026-01-07 00:56:21.603641 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-07 00:56:21.603648 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:10.091) 0:01:12.585 ***** 2026-01-07 00:56:21.603654 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603660 | orchestrator | 2026-01-07 00:56:21.603667 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:56:21.603677 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:00.137) 0:01:12.722 ***** 2026-01-07 00:56:21.603684 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603690 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603696 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603702 | orchestrator | 2026-01-07 00:56:21.603709 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-07 00:56:21.603715 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:01.097) 0:01:13.819 ***** 2026-01-07 00:56:21.603721 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.603727 | orchestrator | 2026-01-07 00:56:21.603733 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-07 00:56:21.603740 | orchestrator | Wednesday 07 January 2026 00:54:24 +0000 (0:00:09.288) 0:01:23.107 ***** 2026-01-07 00:56:21.603746 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603752 | orchestrator | 2026-01-07 00:56:21.603758 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-07 00:56:21.603765 | orchestrator | Wednesday 07 January 2026 00:54:26 +0000 (0:00:01.552) 0:01:24.660 ***** 2026-01-07 00:56:21.603772 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.603778 | 
orchestrator | 2026-01-07 00:56:21.603784 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-07 00:56:21.603790 | orchestrator | Wednesday 07 January 2026 00:54:29 +0000 (0:00:02.639) 0:01:27.300 ***** 2026-01-07 00:56:21.603796 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.603803 | orchestrator | 2026-01-07 00:56:21.603809 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-07 00:56:21.603815 | orchestrator | Wednesday 07 January 2026 00:54:29 +0000 (0:00:00.128) 0:01:27.429 ***** 2026-01-07 00:56:21.603821 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603848 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.603855 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.603861 | orchestrator | 2026-01-07 00:56:21.603868 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-07 00:56:21.603874 | orchestrator | Wednesday 07 January 2026 00:54:29 +0000 (0:00:00.287) 0:01:27.716 ***** 2026-01-07 00:56:21.603880 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.603887 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-07 00:56:21.603893 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:21.603905 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:21.603912 | orchestrator | 2026-01-07 00:56:21.603918 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-07 00:56:21.603925 | orchestrator | skipping: no hosts matched 2026-01-07 00:56:21.603931 | orchestrator | 2026-01-07 00:56:21.603937 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 00:56:21.603943 | orchestrator | 2026-01-07 00:56:21.603950 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-07 00:56:21.603956 | orchestrator | Wednesday 07 January 2026 00:54:30 +0000 (0:00:00.456) 0:01:28.172 ***** 2026-01-07 00:56:21.603963 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:21.603969 | orchestrator | 2026-01-07 00:56:21.603975 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:56:21.603982 | orchestrator | Wednesday 07 January 2026 00:54:51 +0000 (0:00:21.043) 0:01:49.216 ***** 2026-01-07 00:56:21.603988 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.603994 | orchestrator | 2026-01-07 00:56:21.604001 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:56:21.604007 | orchestrator | Wednesday 07 January 2026 00:55:01 +0000 (0:00:10.578) 0:01:59.795 ***** 2026-01-07 00:56:21.604013 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.604020 | orchestrator | 2026-01-07 00:56:21.604026 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 00:56:21.604052 | orchestrator | 2026-01-07 00:56:21.604059 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 00:56:21.604066 | orchestrator | Wednesday 07 January 2026 00:55:04 +0000 (0:00:02.805) 0:02:02.601 ***** 2026-01-07 00:56:21.604072 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:21.604079 | orchestrator | 2026-01-07 00:56:21.604085 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:56:21.604091 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:21.890) 0:02:24.492 ***** 2026-01-07 00:56:21.604098 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.604104 | orchestrator | 2026-01-07 00:56:21.604110 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:56:21.604116 
| orchestrator | Wednesday 07 January 2026 00:55:37 +0000 (0:00:10.728) 0:02:35.220 ***** 2026-01-07 00:56:21.604123 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.604129 | orchestrator | 2026-01-07 00:56:21.604135 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-07 00:56:21.604142 | orchestrator | 2026-01-07 00:56:21.604152 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 00:56:21.604158 | orchestrator | Wednesday 07 January 2026 00:55:39 +0000 (0:00:02.576) 0:02:37.797 ***** 2026-01-07 00:56:21.604165 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.604171 | orchestrator | 2026-01-07 00:56:21.604177 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:56:21.604184 | orchestrator | Wednesday 07 January 2026 00:55:51 +0000 (0:00:11.806) 0:02:49.603 ***** 2026-01-07 00:56:21.604190 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.604196 | orchestrator | 2026-01-07 00:56:21.604203 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:56:21.604209 | orchestrator | Wednesday 07 January 2026 00:55:56 +0000 (0:00:04.666) 0:02:54.270 ***** 2026-01-07 00:56:21.604216 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.604222 | orchestrator | 2026-01-07 00:56:21.604228 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-07 00:56:21.604235 | orchestrator | 2026-01-07 00:56:21.604241 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-07 00:56:21.604247 | orchestrator | Wednesday 07 January 2026 00:55:58 +0000 (0:00:02.802) 0:02:57.072 ***** 2026-01-07 00:56:21.604254 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:21.604260 | orchestrator | 
2026-01-07 00:56:21.604270 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-07 00:56:21.604281 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:00.523) 0:02:57.595 ***** 2026-01-07 00:56:21.604288 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604294 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604300 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.604306 | orchestrator | 2026-01-07 00:56:21.604313 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-07 00:56:21.604319 | orchestrator | Wednesday 07 January 2026 00:56:01 +0000 (0:00:02.410) 0:03:00.006 ***** 2026-01-07 00:56:21.604325 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604331 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604337 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.604344 | orchestrator | 2026-01-07 00:56:21.604350 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-07 00:56:21.604357 | orchestrator | Wednesday 07 January 2026 00:56:04 +0000 (0:00:02.694) 0:03:02.700 ***** 2026-01-07 00:56:21.604363 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604369 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604375 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.604382 | orchestrator | 2026-01-07 00:56:21.604388 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-07 00:56:21.604394 | orchestrator | Wednesday 07 January 2026 00:56:07 +0000 (0:00:02.780) 0:03:05.481 ***** 2026-01-07 00:56:21.604401 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604407 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604413 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:21.604419 | orchestrator | 
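The rolling restarts above each wait for the restarted node to report WSREP state "Synced" before the play moves on to the next node. A minimal sketch of such a check, assuming tab-separated `mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"` output (the helper names are hypothetical; kolla-ansible's actual implementation may differ):

```python
def parse_mysql_status(output: str) -> dict:
    """Parse tab-separated `mysql -e "SHOW STATUS LIKE ..."` output
    into a {variable: value} dict (sketch, hypothetical helper)."""
    status = {}
    for line in output.strip().splitlines():
        parts = line.split("\t")
        if len(parts) == 2:
            status[parts[0]] = parts[1]
    return status


def wsrep_synced(output: str) -> bool:
    """True once the node reports Galera state 'Synced'; other states
    such as 'Donor/Desynced' or 'Joined' mean the node is not ready."""
    return parse_mysql_status(output).get("wsrep_local_state_comment") == "Synced"
```

Waiting for "Synced" on each node before restarting the next one is what keeps the rolling restart from ever taking the whole Galera cluster below quorum.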
2026-01-07 00:56:21.604426 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-07 00:56:21.604432 | orchestrator | Wednesday 07 January 2026 00:56:09 +0000 (0:00:02.485) 0:03:07.967 ***** 2026-01-07 00:56:21.604438 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.604445 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.604451 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.604457 | orchestrator | 2026-01-07 00:56:21.604464 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-07 00:56:21.604470 | orchestrator | Wednesday 07 January 2026 00:56:14 +0000 (0:00:04.247) 0:03:12.215 ***** 2026-01-07 00:56:21.604476 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.604482 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604489 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604495 | orchestrator | 2026-01-07 00:56:21.604501 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-07 00:56:21.604508 | orchestrator | Wednesday 07 January 2026 00:56:16 +0000 (0:00:02.253) 0:03:14.468 ***** 2026-01-07 00:56:21.604514 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.604520 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604527 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604533 | orchestrator | 2026-01-07 00:56:21.604539 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-07 00:56:21.604546 | orchestrator | Wednesday 07 January 2026 00:56:16 +0000 (0:00:00.508) 0:03:14.977 ***** 2026-01-07 00:56:21.604552 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:21.604558 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:21.604565 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:21.604571 | orchestrator | 2026-01-07 00:56:21.604577 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-07 00:56:21.604584 | orchestrator | Wednesday 07 January 2026 00:56:19 +0000 (0:00:02.795) 0:03:17.772 ***** 2026-01-07 00:56:21.604590 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:21.604596 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:21.604603 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:21.604609 | orchestrator | 2026-01-07 00:56:21.604615 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:56:21.604627 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-07 00:56:21.604633 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-01-07 00:56:21.604641 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-07 00:56:21.604647 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-07 00:56:21.604653 | orchestrator | 2026-01-07 00:56:21.604660 | orchestrator | 2026-01-07 00:56:21.604669 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:56:21.604676 | orchestrator | Wednesday 07 January 2026 00:56:20 +0000 (0:00:00.394) 0:03:18.167 ***** 2026-01-07 00:56:21.604682 | orchestrator | =============================================================================== 2026-01-07 00:56:21.604689 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.93s 2026-01-07 00:56:21.604695 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.31s 2026-01-07 00:56:21.604701 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.81s 2026-01-07 00:56:21.604708 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.79s 2026-01-07 00:56:21.604714 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.09s 2026-01-07 00:56:21.604720 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.29s 2026-01-07 00:56:21.604726 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.38s 2026-01-07 00:56:21.604733 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.67s 2026-01-07 00:56:21.604743 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.25s 2026-01-07 00:56:21.604749 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.15s 2026-01-07 00:56:21.604755 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.64s 2026-01-07 00:56:21.604762 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.16s 2026-01-07 00:56:21.604768 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.84s 2026-01-07 00:56:21.604774 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s 2026-01-07 00:56:21.604780 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.80s 2026-01-07 00:56:21.604787 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.79s 2026-01-07 00:56:21.604793 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.78s 2026-01-07 00:56:21.604800 | orchestrator | Check MariaDB service --------------------------------------------------- 2.75s 2026-01-07 00:56:21.604806 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.69s 2026-01-07 00:56:21.604812 | orchestrator | mariadb : 
Ensuring config directories exist ----------------------------- 2.68s 2026-01-07 00:56:21.604818 | orchestrator | 2026-01-07 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:24.637759 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:56:24.639332 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state STARTED 2026-01-07 00:56:24.639399 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:56:24.639413 | orchestrator | 2026-01-07 00:56:24 | INFO  | Wait 1 second(s) until the next
check 2026-01-07 00:57:10.297900 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:10.301771 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task 72d46363-048f-4aeb-9037-e14999842aa0 is in state SUCCESS 2026-01-07 00:57:10.303559 | orchestrator | 2026-01-07 00:57:10.303612 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:57:10.303618 | orchestrator | 2.16.14 2026-01-07 00:57:10.303624 | orchestrator | 2026-01-07 00:57:10.303639 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-07 00:57:10.303647 | orchestrator | 2026-01-07 00:57:10.303656 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-07 00:57:10.303666 | orchestrator | Wednesday 07 January 2026 00:55:01 +0000 (0:00:00.473) 0:00:00.473 ***** 2026-01-07 00:57:10.303673 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:57:10.303681 | orchestrator | 2026-01-07 00:57:10.303688 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-07 00:57:10.303694 | orchestrator | Wednesday 07 January 2026 00:55:02 +0000 (0:00:00.445) 0:00:00.919 ***** 2026-01-07 00:57:10.303715 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.303722 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.303729 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303736 | orchestrator | 2026-01-07 00:57:10.303742 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-07 00:57:10.303749 | orchestrator | Wednesday 07 January 2026 00:55:02 +0000 (0:00:00.704) 0:00:01.623 ***** 2026-01-07 00:57:10.303756 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303763 | orchestrator | ok: [testbed-node-4] 
2026-01-07 00:57:10.303769 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.303775 | orchestrator | 2026-01-07 00:57:10.303782 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-07 00:57:10.303789 | orchestrator | Wednesday 07 January 2026 00:55:03 +0000 (0:00:00.261) 0:00:01.885 ***** 2026-01-07 00:57:10.303796 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.303803 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303809 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.303817 | orchestrator | 2026-01-07 00:57:10.303824 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-07 00:57:10.303831 | orchestrator | Wednesday 07 January 2026 00:55:03 +0000 (0:00:00.840) 0:00:02.726 ***** 2026-01-07 00:57:10.303837 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303843 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.303848 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.303852 | orchestrator | 2026-01-07 00:57:10.303856 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-07 00:57:10.303861 | orchestrator | Wednesday 07 January 2026 00:55:04 +0000 (0:00:00.264) 0:00:02.991 ***** 2026-01-07 00:57:10.303865 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303869 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.303873 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.303877 | orchestrator | 2026-01-07 00:57:10.303882 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-07 00:57:10.303886 | orchestrator | Wednesday 07 January 2026 00:55:04 +0000 (0:00:00.276) 0:00:03.267 ***** 2026-01-07 00:57:10.303890 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.303894 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.303898 | orchestrator | ok: [testbed-node-5] 2026-01-07 
00:57:10.303903 | orchestrator | 2026-01-07 00:57:10.303907 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-07 00:57:10.303911 | orchestrator | Wednesday 07 January 2026 00:55:04 +0000 (0:00:00.286) 0:00:03.554 ***** 2026-01-07 00:57:10.303916 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.303920 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.303925 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.303929 | orchestrator | 2026-01-07 00:57:10.303933 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-07 00:57:10.303940 | orchestrator | Wednesday 07 January 2026 00:55:05 +0000 (0:00:00.393) 0:00:03.947 ***** 2026-01-07 00:57:10.304042 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.304052 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.304057 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.304061 | orchestrator | 2026-01-07 00:57:10.304065 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-07 00:57:10.304069 | orchestrator | Wednesday 07 January 2026 00:55:05 +0000 (0:00:00.287) 0:00:04.235 ***** 2026-01-07 00:57:10.304073 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:57:10.304078 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:57:10.304082 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:57:10.304086 | orchestrator | 2026-01-07 00:57:10.304090 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-07 00:57:10.304094 | orchestrator | Wednesday 07 January 2026 00:55:05 +0000 (0:00:00.601) 0:00:04.836 ***** 2026-01-07 00:57:10.304101 | orchestrator | ok: [testbed-node-3] 2026-01-07 
00:57:10.304498 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.304509 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.304516 | orchestrator | 2026-01-07 00:57:10.304523 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-07 00:57:10.304529 | orchestrator | Wednesday 07 January 2026 00:55:06 +0000 (0:00:00.370) 0:00:05.207 ***** 2026-01-07 00:57:10.304535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:57:10.304542 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:57:10.304549 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:57:10.304556 | orchestrator | 2026-01-07 00:57:10.304563 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-07 00:57:10.304570 | orchestrator | Wednesday 07 January 2026 00:55:08 +0000 (0:00:01.918) 0:00:07.126 ***** 2026-01-07 00:57:10.304577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:57:10.304584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:57:10.304592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:57:10.304599 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304606 | orchestrator | 2026-01-07 00:57:10.304639 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-07 00:57:10.304652 | orchestrator | Wednesday 07 January 2026 00:55:08 +0000 (0:00:00.521) 0:00:07.648 ***** 2026-01-07 00:57:10.304657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 
00:57:10.304664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.304669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.304673 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304677 | orchestrator | 2026-01-07 00:57:10.304681 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-07 00:57:10.304685 | orchestrator | Wednesday 07 January 2026 00:55:09 +0000 (0:00:00.654) 0:00:08.302 ***** 2026-01-07 00:57:10.304691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.304728 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.304733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.304737 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304742 | orchestrator | 2026-01-07 00:57:10.304746 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-07 00:57:10.304750 | orchestrator | Wednesday 07 January 2026 00:55:09 +0000 (0:00:00.265) 0:00:08.568 ***** 2026-01-07 00:57:10.304756 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '33d0da6f9c05', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:55:06.957397', 'end': '2026-01-07 00:55:06.984054', 'delta': '0:00:00.026657', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33d0da6f9c05'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-07 00:57:10.304763 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e749ce2e3140', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:55:07.609607', 'end': '2026-01-07 00:55:07.637595', 'delta': '0:00:00.027988', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e749ce2e3140'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-07 00:57:10.304788 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6ea61690a4c9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:55:08.095372', 'end': '2026-01-07 00:55:08.132399', 'delta': '0:00:00.037027', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6ea61690a4c9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-07 00:57:10.304793 | orchestrator | 2026-01-07 00:57:10.304798 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-07 00:57:10.304814 | orchestrator | Wednesday 07 January 2026 00:55:09 +0000 (0:00:00.169) 0:00:08.737 ***** 2026-01-07 00:57:10.304819 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.304823 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:57:10.304832 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:57:10.304836 | orchestrator | 2026-01-07 00:57:10.304840 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-07 00:57:10.304845 | orchestrator | Wednesday 07 January 2026 00:55:10 +0000 (0:00:00.381) 0:00:09.119 ***** 2026-01-07 00:57:10.304849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-07 00:57:10.304853 | orchestrator | 2026-01-07 00:57:10.304858 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 
1] ********************************* 2026-01-07 00:57:10.304862 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:01.694) 0:00:10.814 ***** 2026-01-07 00:57:10.304866 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304870 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.304874 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.304878 | orchestrator | 2026-01-07 00:57:10.304883 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-07 00:57:10.304887 | orchestrator | Wednesday 07 January 2026 00:55:12 +0000 (0:00:00.247) 0:00:11.061 ***** 2026-01-07 00:57:10.304891 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304895 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.304899 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.304904 | orchestrator | 2026-01-07 00:57:10.304908 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:57:10.304912 | orchestrator | Wednesday 07 January 2026 00:55:12 +0000 (0:00:00.352) 0:00:11.414 ***** 2026-01-07 00:57:10.304916 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304920 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.304924 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.304929 | orchestrator | 2026-01-07 00:57:10.304935 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-07 00:57:10.304942 | orchestrator | Wednesday 07 January 2026 00:55:12 +0000 (0:00:00.449) 0:00:11.863 ***** 2026-01-07 00:57:10.304954 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:57:10.304961 | orchestrator | 2026-01-07 00:57:10.304967 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-07 00:57:10.304974 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:00.126) 0:00:11.990 
***** 2026-01-07 00:57:10.304980 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.304987 | orchestrator | 2026-01-07 00:57:10.304993 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:57:10.305000 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:00.206) 0:00:12.196 ***** 2026-01-07 00:57:10.305007 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305013 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305020 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305027 | orchestrator | 2026-01-07 00:57:10.305034 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-07 00:57:10.305042 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:00.248) 0:00:12.445 ***** 2026-01-07 00:57:10.305047 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305051 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305055 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305060 | orchestrator | 2026-01-07 00:57:10.305064 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-07 00:57:10.305068 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:00.275) 0:00:12.720 ***** 2026-01-07 00:57:10.305072 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305076 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305081 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305085 | orchestrator | 2026-01-07 00:57:10.305089 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-07 00:57:10.305093 | orchestrator | Wednesday 07 January 2026 00:55:14 +0000 (0:00:00.378) 0:00:13.098 ***** 2026-01-07 00:57:10.305097 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305106 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:57:10.305110 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305114 | orchestrator | 2026-01-07 00:57:10.305119 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-07 00:57:10.305125 | orchestrator | Wednesday 07 January 2026 00:55:14 +0000 (0:00:00.271) 0:00:13.370 ***** 2026-01-07 00:57:10.305129 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305134 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305139 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305144 | orchestrator | 2026-01-07 00:57:10.305149 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-07 00:57:10.305154 | orchestrator | Wednesday 07 January 2026 00:55:14 +0000 (0:00:00.274) 0:00:13.645 ***** 2026-01-07 00:57:10.305159 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305164 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305168 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305194 | orchestrator | 2026-01-07 00:57:10.305200 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-07 00:57:10.305211 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.285) 0:00:13.930 ***** 2026-01-07 00:57:10.305219 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305226 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305232 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305240 | orchestrator | 2026-01-07 00:57:10.305248 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-07 00:57:10.305256 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.386) 0:00:14.317 ***** 2026-01-07 00:57:10.305265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8', 'dm-uuid-LVM-V5VfCcYGKl4Bnur1uNiNQmiaWW7ddFt6yluzzHTHlitz361XN7j045GmAnuIzDE8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b', 'dm-uuid-LVM-wc4kI6OqwAscwmmkndvJZpG8N6izNye5C1HoNvgtq5hrMpHm7PsUJkX9BVSVRxeq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8', 'dm-uuid-LVM-Zr5ep2rmKcYwUjCbEzdIFdOSaEWKuROcCxfBJuzGV2HesNAu0o0smJSLlEI3yrFN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d', 'dm-uuid-LVM-i0wlbpFvqRihHUwefM4dHK3dwlVDMjbtySInas1puXuPoXmmLkM0U18P6QAryVUz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305427 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bobOpU-MYfr-y4Ef-vOoQ-ehNp-x3D7-cMoYMn', 'scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a', 'scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFdUZj-kLPS-OcbZ-VKZl-KciB-jGso-qCnklM', 'scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9', 'scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4', 'scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 00:57:10.305508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oeVac9-FCb1-x3bL-GDLT-GpDm-oKMP-vtGmCV', 'scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83', 'scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7fOV3-0dfJ-XnKV-BIi2-7zfJ-1I70-sGC1en', 'scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d', 'scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8', 'scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305535 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.305539 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.305544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955', 'dm-uuid-LVM-Rujcq0UkmlYflKsC4fd33Dkl5dBRwjS65A8s9BZ4s4y1kvUR8RL7YEeRphzA7scE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637', 'dm-uuid-LVM-iTQuPrx0FTrMFHWPXcV7DY3IVPYTbJBCy5T5YfzJ7HdIkSfe6dduLErg3NlIYd5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:57:10.305626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eH03hQ-AV7L-sq1w-ZK3M-bMB9-XpM9-NquE15', 'scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb', 'scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sHkOms-DWVH-8KmS-PD8X-N66N-HIm4-33sBky', 'scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6', 'scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e', 'scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:57:10.305747 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:57:10.305751 | orchestrator | 2026-01-07 00:57:10.305756 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-07 00:57:10.305760 | orchestrator | Wednesday 07 January 2026 00:55:15 +0000 (0:00:00.522) 0:00:14.839 ***** 2026-01-07 00:57:10.305766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8', 'dm-uuid-LVM-V5VfCcYGKl4Bnur1uNiNQmiaWW7ddFt6yluzzHTHlitz361XN7j045GmAnuIzDE8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b', 'dm-uuid-LVM-wc4kI6OqwAscwmmkndvJZpG8N6izNye5C1HoNvgtq5hrMpHm7PsUJkX9BVSVRxeq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b4668cd-eddb-4f75-af43-c9aa7c282e38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:57:10.305891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8', 'dm-uuid-LVM-Zr5ep2rmKcYwUjCbEzdIFdOSaEWKuROcCxfBJuzGV2HesNAu0o0smJSLlEI3yrFN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--23474997--0e8b--5abe--afd2--a58c42930ca8-osd--block--23474997--0e8b--5abe--afd2--a58c42930ca8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bobOpU-MYfr-y4Ef-vOoQ-ehNp-x3D7-cMoYMn', 'scsi-0QEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a', 'scsi-SQEMU_QEMU_HARDDISK_ed5d5180-ce4b-4e07-b5d5-8188f5330d5a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d', 'dm-uuid-LVM-i0wlbpFvqRihHUwefM4dHK3dwlVDMjbtySInas1puXuPoXmmLkM0U18P6QAryVUz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--18b58870--6028--5d13--8db0--fb505e00be4b-osd--block--18b58870--6028--5d13--8db0--fb505e00be4b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fFdUZj-kLPS-OcbZ-VKZl-KciB-jGso-qCnklM', 'scsi-0QEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9', 'scsi-SQEMU_QEMU_HARDDISK_e61db5f0-3441-47eb-ad05-afa38bd974c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.305928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4', 'scsi-SQEMU_QEMU_HARDDISK_451db668-b89a-4789-9627-d8fc1f6d5aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306076 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306095 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:57:10.306099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306107 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_28c074ed-67b3-4d4b-904c-ddd92b24aa2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306135 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b296d094--78ce--5ce3--9fe3--598726116dc8-osd--block--b296d094--78ce--5ce3--9fe3--598726116dc8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oeVac9-FCb1-x3bL-GDLT-GpDm-oKMP-vtGmCV', 'scsi-0QEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83', 'scsi-SQEMU_QEMU_HARDDISK_ac2b7251-4646-40ec-bf32-0660e60c3d83'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--73010335--3e9e--51ea--81b3--4dcf5932c07d-osd--block--73010335--3e9e--51ea--81b3--4dcf5932c07d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z7fOV3-0dfJ-XnKV-BIi2-7zfJ-1I70-sGC1en', 'scsi-0QEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d', 'scsi-SQEMU_QEMU_HARDDISK_af026b45-5f1b-4363-b58a-40461e27717d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306144 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8', 'scsi-SQEMU_QEMU_HARDDISK_39bf8256-2574-48b9-8944-112b8c6b12d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955', 'dm-uuid-LVM-Rujcq0UkmlYflKsC4fd33Dkl5dBRwjS65A8s9BZ4s4y1kvUR8RL7YEeRphzA7scE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306167 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637', 'dm-uuid-LVM-iTQuPrx0FTrMFHWPXcV7DY3IVPYTbJBCy5T5YfzJ7HdIkSfe6dduLErg3NlIYd5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306172 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:57:10.306176 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306203 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306208 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_2b0adbd3-f096-4fe4-bf33-7e5f2ba17c5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:57:10.306235 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--96f57bfe--16b3--5bb1--823a--e63af6581955-osd--block--96f57bfe--16b3--5bb1--823a--e63af6581955'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eH03hQ-AV7L-sq1w-ZK3M-bMB9-XpM9-NquE15', 'scsi-0QEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb', 'scsi-SQEMU_QEMU_HARDDISK_4dc8db36-ef1a-4565-8e9f-1534b8544abb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e44d1cae--1e57--574a--aa47--ecf7991dd637-osd--block--e44d1cae--1e57--574a--aa47--ecf7991dd637'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sHkOms-DWVH-8KmS-PD8X-N66N-HIm4-33sBky', 'scsi-0QEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6', 'scsi-SQEMU_QEMU_HARDDISK_bc9ba819-f1d8-4743-b965-7bb37c5542e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e', 'scsi-SQEMU_QEMU_HARDDISK_75141bd6-71d6-4da2-92fc-ccdb5e69cb7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:57:10.306253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-04-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 00:57:10.306263 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306267 | orchestrator |
2026-01-07 00:57:10.306271 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-07 00:57:10.306276 | orchestrator | Wednesday 07 January 2026 00:55:16 +0000 (0:00:00.603) 0:00:15.443 *****
2026-01-07 00:57:10.306280 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:57:10.306285 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:57:10.306289 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:57:10.306293 | orchestrator |
2026-01-07 00:57:10.306297 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-07 00:57:10.306301 | orchestrator | Wednesday 07 January 2026 00:55:17 +0000 (0:00:00.719) 0:00:16.163 *****
2026-01-07 00:57:10.306306 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:57:10.306310 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:57:10.306314 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:57:10.306318 | orchestrator |
2026-01-07 00:57:10.306322 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-07 00:57:10.306326 | orchestrator | Wednesday 07 January 2026 00:55:17 +0000 (0:00:00.451) 0:00:16.615 *****
2026-01-07 00:57:10.306331 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:57:10.306335 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:57:10.306339 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:57:10.306343 | orchestrator |
2026-01-07 00:57:10.306347 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-07 00:57:10.306351 | orchestrator | Wednesday 07 January 2026 00:55:18 +0000 (0:00:00.652) 0:00:17.268 *****
2026-01-07 00:57:10.306356 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306360 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306364 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306368 | orchestrator |
2026-01-07 00:57:10.306372 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-07 00:57:10.306376 | orchestrator | Wednesday 07 January 2026 00:55:18 +0000 (0:00:00.285) 0:00:17.553 *****
2026-01-07 00:57:10.306381 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306385 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306389 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306393 | orchestrator |
2026-01-07 00:57:10.306397 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-07 00:57:10.306401 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.460) 0:00:17.939 *****
2026-01-07 00:57:10.306405 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306409 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306413 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306418 | orchestrator |
2026-01-07 00:57:10.306422 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-07 00:57:10.306426 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.824) 0:00:18.399 *****
2026-01-07 00:57:10.306430 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 00:57:10.306434 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-07 00:57:10.306438 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 00:57:10.306442 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-07 00:57:10.306446 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-07 00:57:10.306450 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 00:57:10.306454 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-07 00:57:10.306459 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-07 00:57:10.306466 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-07 00:57:10.306470 | orchestrator |
2026-01-07 00:57:10.306475 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-07 00:57:10.306479 | orchestrator | Wednesday 07 January 2026 00:55:20 +0000 (0:00:00.824) 0:00:19.223 *****
2026-01-07 00:57:10.306488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 00:57:10.306493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 00:57:10.306497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 00:57:10.306501 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-07 00:57:10.306509 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-07 00:57:10.306514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-07 00:57:10.306518 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-07 00:57:10.306526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-07 00:57:10.306530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-07 00:57:10.306534 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306538 | orchestrator |
2026-01-07 00:57:10.306542 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-07 00:57:10.306546 | orchestrator | Wednesday 07 January 2026 00:55:20 +0000 (0:00:00.363) 0:00:19.587 *****
2026-01-07 00:57:10.306551 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:57:10.306556 | orchestrator |
2026-01-07 00:57:10.306560 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-07 00:57:10.306565 | orchestrator | Wednesday 07 January 2026 00:55:21 +0000 (0:00:00.636) 0:00:20.223 *****
2026-01-07 00:57:10.306572 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306576 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306581 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306585 | orchestrator |
2026-01-07 00:57:10.306591 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-07 00:57:10.306596 | orchestrator | Wednesday 07 January 2026 00:55:21 +0000 (0:00:00.294) 0:00:20.518 *****
2026-01-07 00:57:10.306600 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306604 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306608 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306612 | orchestrator |
2026-01-07 00:57:10.306616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-07 00:57:10.306621 | orchestrator | Wednesday 07 January 2026 00:55:21 +0000 (0:00:00.286) 0:00:20.805 *****
2026-01-07 00:57:10.306625 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306629 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306633 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:57:10.306638 | orchestrator |
2026-01-07 00:57:10.306643 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-07 00:57:10.306648 | orchestrator | Wednesday 07 January 2026 00:55:22 +0000 (0:00:00.284) 0:00:21.089 *****
2026-01-07 00:57:10.306653 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:57:10.306658 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:57:10.306663 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:57:10.306667 | orchestrator |
2026-01-07 00:57:10.306672 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-07 00:57:10.306677 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.805) 0:00:21.895 *****
2026-01-07 00:57:10.306682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:57:10.306687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:57:10.306691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:57:10.306726 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306732 | orchestrator |
2026-01-07 00:57:10.306737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-07 00:57:10.306742 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.372) 0:00:22.267 *****
2026-01-07 00:57:10.306747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:57:10.306752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:57:10.306757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:57:10.306762 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306767 | orchestrator |
2026-01-07 00:57:10.306772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-07 00:57:10.306776 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.364) 0:00:22.632 *****
2026-01-07 00:57:10.306781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:57:10.306786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:57:10.306791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:57:10.306796 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306801 | orchestrator |
2026-01-07 00:57:10.306806 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-07 00:57:10.306810 | orchestrator | Wednesday 07 January 2026 00:55:24 +0000 (0:00:00.359) 0:00:22.992 *****
2026-01-07 00:57:10.306815 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:57:10.306820 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:57:10.306824 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:57:10.306830 | orchestrator |
2026-01-07 00:57:10.306835 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-07 00:57:10.306840 | orchestrator | Wednesday 07 January 2026 00:55:24 +0000 (0:00:00.319) 0:00:23.311 *****
2026-01-07 00:57:10.306845 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 00:57:10.306849 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-07 00:57:10.306854 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-07 00:57:10.306859 | orchestrator |
2026-01-07 00:57:10.306864 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-07 00:57:10.306868 | orchestrator | Wednesday 07 January 2026 00:55:24 +0000 (0:00:00.498) 0:00:23.809 *****
2026-01-07 00:57:10.306873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:57:10.306879 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:57:10.306884 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:57:10.306889 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:57:10.306894 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-07 00:57:10.306899 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 00:57:10.306903 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 00:57:10.306908 | orchestrator |
2026-01-07 00:57:10.306913 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-07 00:57:10.306918 | orchestrator | Wednesday 07 January 2026 00:55:25 +0000 (0:00:00.935) 0:00:24.744 *****
2026-01-07 00:57:10.306922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 00:57:10.306927 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 00:57:10.306932 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 00:57:10.306936 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:57:10.306941 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-07 00:57:10.306946 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 00:57:10.306957 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 00:57:10.306962 | orchestrator |
2026-01-07 00:57:10.306969 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-01-07 00:57:10.306974 | orchestrator | Wednesday 07 January 2026 00:55:27 +0000 (0:00:01.904) 0:00:26.649 *****
2026-01-07 00:57:10.306979 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:57:10.306984 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:57:10.306989 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-01-07 00:57:10.306994 | orchestrator |
2026-01-07 00:57:10.306998 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-07 00:57:10.307004 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:00.374) 0:00:27.024 ***** 2026-01-07 00:57:10.307009 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:57:10.307015 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:57:10.307020 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:57:10.307026 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:57:10.307031 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:57:10.307036 | orchestrator | 2026-01-07 00:57:10.307040 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-07 00:57:10.307044 | orchestrator | Wednesday 07 January 2026 00:56:12 +0000 (0:00:44.623) 0:01:11.647 ***** 2026-01-07 00:57:10.307049 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307057 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307061 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307065 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307073 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-07 00:57:10.307077 | orchestrator | 2026-01-07 00:57:10.307082 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-07 00:57:10.307086 | orchestrator | Wednesday 07 January 2026 00:56:36 +0000 (0:00:23.667) 0:01:35.315 ***** 2026-01-07 00:57:10.307090 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307094 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307110 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307114 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307118 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:57:10.307122 | orchestrator | 2026-01-07 00:57:10.307127 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-07 00:57:10.307131 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:12.157) 0:01:47.472 ***** 2026-01-07 00:57:10.307135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307139 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:57:10.307143 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307152 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:57:10.307159 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307166 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307171 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:57:10.307175 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307180 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307184 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:57:10.307188 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307192 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307197 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-07 00:57:10.307201 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307206 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:57:10.307210 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:57:10.307214 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:57:10.307219 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-07 00:57:10.307223 | orchestrator | 2026-01-07 00:57:10.307227 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:57:10.307232 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-07 00:57:10.307237 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-07 00:57:10.307241 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-07 00:57:10.307246 | orchestrator | 2026-01-07 00:57:10.307250 | orchestrator | 2026-01-07 00:57:10.307254 | orchestrator | 2026-01-07 00:57:10.307258 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:57:10.307263 | orchestrator | Wednesday 07 January 2026 00:57:06 +0000 (0:00:18.213) 0:02:05.686 ***** 2026-01-07 00:57:10.307267 | orchestrator | =============================================================================== 2026-01-07 00:57:10.307271 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.62s 2026-01-07 00:57:10.307280 | orchestrator | generate keys ---------------------------------------------------------- 23.67s 2026-01-07 00:57:10.307284 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.21s 
2026-01-07 00:57:10.307288 | orchestrator | get keys from monitors ------------------------------------------------- 12.16s 2026-01-07 00:57:10.307292 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.92s 2026-01-07 00:57:10.307297 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s 2026-01-07 00:57:10.307301 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s 2026-01-07 00:57:10.307305 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s 2026-01-07 00:57:10.307310 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-01-07 00:57:10.307314 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2026-01-07 00:57:10.307318 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.81s 2026-01-07 00:57:10.307322 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2026-01-07 00:57:10.307326 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.70s 2026-01-07 00:57:10.307330 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.65s 2026-01-07 00:57:10.307334 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-01-07 00:57:10.307339 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.64s 2026-01-07 00:57:10.307343 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s 2026-01-07 00:57:10.307347 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2026-01-07 00:57:10.307351 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.52s 2026-01-07 
00:57:10.307355 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.52s 2026-01-07 00:57:10.307359 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:10.307364 | orchestrator | 2026-01-07 00:57:10 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:10.307368 | orchestrator | 2026-01-07 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:13.347423 | orchestrator | 2026-01-07 00:57:13 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:13.349048 | orchestrator | 2026-01-07 00:57:13 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:13.350272 | orchestrator | 2026-01-07 00:57:13 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:13.350805 | orchestrator | 2026-01-07 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:16.384009 | orchestrator | 2026-01-07 00:57:16 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:16.385037 | orchestrator | 2026-01-07 00:57:16 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:16.385973 | orchestrator | 2026-01-07 00:57:16 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:16.386048 | orchestrator | 2026-01-07 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:19.419800 | orchestrator | 2026-01-07 00:57:19 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:19.420359 | orchestrator | 2026-01-07 00:57:19 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:19.421483 | orchestrator | 2026-01-07 00:57:19 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:19.421524 | orchestrator | 2026-01-07 
00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:22.465310 | orchestrator | 2026-01-07 00:57:22 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:22.466136 | orchestrator | 2026-01-07 00:57:22 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:22.470287 | orchestrator | 2026-01-07 00:57:22 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:22.470392 | orchestrator | 2026-01-07 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:25.524199 | orchestrator | 2026-01-07 00:57:25 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:25.526785 | orchestrator | 2026-01-07 00:57:25 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:25.528933 | orchestrator | 2026-01-07 00:57:25 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:25.529067 | orchestrator | 2026-01-07 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:28.585316 | orchestrator | 2026-01-07 00:57:28 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:28.585866 | orchestrator | 2026-01-07 00:57:28 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:28.587113 | orchestrator | 2026-01-07 00:57:28 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:28.587213 | orchestrator | 2026-01-07 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:31.646997 | orchestrator | 2026-01-07 00:57:31 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:31.648401 | orchestrator | 2026-01-07 00:57:31 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:31.650693 | orchestrator | 2026-01-07 00:57:31 | INFO  | Task 
62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:31.651009 | orchestrator | 2026-01-07 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:34.709133 | orchestrator | 2026-01-07 00:57:34 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:34.722064 | orchestrator | 2026-01-07 00:57:34 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:34.722134 | orchestrator | 2026-01-07 00:57:34 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:34.722144 | orchestrator | 2026-01-07 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:37.767371 | orchestrator | 2026-01-07 00:57:37 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:37.767829 | orchestrator | 2026-01-07 00:57:37 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:37.769995 | orchestrator | 2026-01-07 00:57:37 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:37.770058 | orchestrator | 2026-01-07 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:40.823580 | orchestrator | 2026-01-07 00:57:40 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:40.824608 | orchestrator | 2026-01-07 00:57:40 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:40.826671 | orchestrator | 2026-01-07 00:57:40 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:40.826723 | orchestrator | 2026-01-07 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:43.881793 | orchestrator | 2026-01-07 00:57:43 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:43.883743 | orchestrator | 2026-01-07 00:57:43 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state 
STARTED 2026-01-07 00:57:43.885912 | orchestrator | 2026-01-07 00:57:43 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state STARTED 2026-01-07 00:57:43.885986 | orchestrator | 2026-01-07 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:46.947113 | orchestrator | 2026-01-07 00:57:46 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:57:46.948050 | orchestrator | 2026-01-07 00:57:46 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:46.949833 | orchestrator | 2026-01-07 00:57:46 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:46.952533 | orchestrator | 2026-01-07 00:57:46 | INFO  | Task 62f8cf83-d782-41b1-b578-10bfd7ed6f45 is in state SUCCESS 2026-01-07 00:57:46.952584 | orchestrator | 2026-01-07 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:50.006103 | orchestrator | 2026-01-07 00:57:50 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:57:50.008083 | orchestrator | 2026-01-07 00:57:50 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:50.010678 | orchestrator | 2026-01-07 00:57:50 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:50.010744 | orchestrator | 2026-01-07 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:53.048142 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:57:53.049967 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:53.050805 | orchestrator | 2026-01-07 00:57:53 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:53.050981 | orchestrator | 2026-01-07 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:56.097892 | orchestrator | 
2026-01-07 00:57:56 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:57:56.100335 | orchestrator | 2026-01-07 00:57:56 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:56.102291 | orchestrator | 2026-01-07 00:57:56 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:56.102732 | orchestrator | 2026-01-07 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:59.143840 | orchestrator | 2026-01-07 00:57:59 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:57:59.144840 | orchestrator | 2026-01-07 00:57:59 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:57:59.146163 | orchestrator | 2026-01-07 00:57:59 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:57:59.146228 | orchestrator | 2026-01-07 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:02.187254 | orchestrator | 2026-01-07 00:58:02 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:02.187307 | orchestrator | 2026-01-07 00:58:02 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:02.187367 | orchestrator | 2026-01-07 00:58:02 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state STARTED 2026-01-07 00:58:02.190980 | orchestrator | 2026-01-07 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:05.236086 | orchestrator | 2026-01-07 00:58:05 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:05.237669 | orchestrator | 2026-01-07 00:58:05 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:05.239841 | orchestrator | 2026-01-07 00:58:05 | INFO  | Task 6df70600-698b-4c78-9d6e-9e5429bc92a5 is in state SUCCESS 2026-01-07 00:58:05.241440 | orchestrator | 2026-01-07 00:58:05.241494 | 
orchestrator | 2026-01-07 00:58:05.241503 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-07 00:58:05.241512 | orchestrator | 2026-01-07 00:58:05.241526 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-07 00:58:05.241534 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-01-07 00:58:05.241555 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-07 00:58:05.241611 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241617 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241623 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 00:58:05.241630 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-07 00:58:05.241643 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-07 00:58:05.241648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-07 00:58:05.241654 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-07 00:58:05.241661 | orchestrator | 2026-01-07 00:58:05.241666 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-07 00:58:05.241673 | orchestrator | Wednesday 07 January 2026 00:57:17 +0000 (0:00:05.748) 0:00:05.903 ***** 2026-01-07 00:58:05.241679 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-07 00:58:05.241686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241692 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241698 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 00:58:05.241705 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241711 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-07 00:58:05.241717 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-07 00:58:05.241724 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-07 00:58:05.241784 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-07 00:58:05.241792 | orchestrator | 2026-01-07 00:58:05.241796 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-07 00:58:05.241800 | orchestrator | Wednesday 07 January 2026 00:57:21 +0000 (0:00:04.094) 0:00:09.998 ***** 2026-01-07 00:58:05.241804 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:58:05.241808 | orchestrator | 2026-01-07 00:58:05.241822 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-07 00:58:05.241858 | orchestrator | Wednesday 07 January 2026 00:57:22 +0000 (0:00:00.945) 0:00:10.944 ***** 2026-01-07 00:58:05.241866 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-07 00:58:05.241872 | orchestrator 
| changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241878 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241884 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 00:58:05.241891 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.241898 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-07 00:58:05.241903 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-07 00:58:05.241909 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-07 00:58:05.241914 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-07 00:58:05.242239 | orchestrator | 2026-01-07 00:58:05.242249 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-07 00:58:05.242255 | orchestrator | Wednesday 07 January 2026 00:57:35 +0000 (0:00:13.089) 0:00:24.033 ***** 2026-01-07 00:58:05.242259 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-07 00:58:05.242265 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-07 00:58:05.242270 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-07 00:58:05.242274 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-07 00:58:05.242289 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-07 00:58:05.242294 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-07 00:58:05.242299 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-07 00:58:05.242310 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-07 00:58:05.242315 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-07 00:58:05.242319 | orchestrator | 2026-01-07 00:58:05.242324 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-07 00:58:05.242330 | orchestrator | Wednesday 07 January 2026 00:57:38 +0000 (0:00:02.949) 0:00:26.982 ***** 2026-01-07 00:58:05.242337 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-07 00:58:05.242346 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.242355 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.242361 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-07 00:58:05.242367 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-07 00:58:05.242373 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-07 00:58:05.242379 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-07 00:58:05.242385 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-07 00:58:05.242391 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-07 00:58:05.242395 | orchestrator | 2026-01-07 00:58:05.242399 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:58:05.242403 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:58:05.242425 | orchestrator | 2026-01-07 00:58:05.242429 | orchestrator | 2026-01-07 00:58:05.242433 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:58:05.242437 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 (0:00:06.859) 0:00:33.842 ***** 2026-01-07 00:58:05.242441 | orchestrator | =============================================================================== 2026-01-07 00:58:05.242445 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.09s 2026-01-07 00:58:05.242449 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.86s 2026-01-07 00:58:05.242452 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.75s 2026-01-07 00:58:05.242456 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s 2026-01-07 00:58:05.242460 | orchestrator | Check if target directories exist --------------------------------------- 2.95s 2026-01-07 00:58:05.242464 | orchestrator | Create share directory -------------------------------------------------- 0.95s 2026-01-07 00:58:05.242468 | orchestrator | 2026-01-07 00:58:05.242471 | orchestrator | 2026-01-07 00:58:05.242475 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:58:05.242479 | orchestrator | 2026-01-07 00:58:05.242483 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:58:05.242487 | orchestrator | Wednesday 07 January 2026 00:56:23 +0000 (0:00:00.226) 0:00:00.226 ***** 2026-01-07 00:58:05.242490 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.242495 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.242499 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.242502 | orchestrator | 2026-01-07 
00:58:05.242506 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:58:05.242510 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.261) 0:00:00.487 ***** 2026-01-07 00:58:05.242514 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-07 00:58:05.242518 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-07 00:58:05.242522 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-07 00:58:05.242526 | orchestrator | 2026-01-07 00:58:05.242530 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-07 00:58:05.242533 | orchestrator | 2026-01-07 00:58:05.242537 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 00:58:05.242541 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.332) 0:00:00.820 ***** 2026-01-07 00:58:05.242545 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:05.242549 | orchestrator | 2026-01-07 00:58:05.242553 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-07 00:58:05.242557 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.461) 0:00:01.281 ***** 2026-01-07 00:58:05.242646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.242660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.242674 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.242685 | orchestrator | 2026-01-07 00:58:05.242689 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-07 00:58:05.242693 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.929) 0:00:02.211 ***** 2026-01-07 00:58:05.242697 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.242701 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.242704 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.242708 | orchestrator | 2026-01-07 00:58:05.242712 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 00:58:05.242716 | orchestrator | Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.319) 0:00:02.531 ***** 2026-01-07 00:58:05.242720 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-07 00:58:05.242724 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-07 00:58:05.242728 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-07 00:58:05.242732 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-07 00:58:05.242735 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-07 00:58:05.242739 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-07 00:58:05.242743 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-07 00:58:05.242747 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-07 00:58:05.242751 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 
'enabled': False})  2026-01-07 00:58:05.242754 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-07 00:58:05.242758 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-07 00:58:05.242762 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-07 00:58:05.242766 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-07 00:58:05.242769 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-07 00:58:05.242773 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-07 00:58:05.242777 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-07 00:58:05.242781 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-07 00:58:05.242788 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-07 00:58:05.242792 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-07 00:58:05.242796 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-07 00:58:05.242802 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-07 00:58:05.242806 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-07 00:58:05.242810 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-07 00:58:05.242816 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-07 00:58:05.242821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 
'enabled': 'yes'}) 2026-01-07 00:58:05.242826 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-07 00:58:05.242831 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-07 00:58:05.242834 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-07 00:58:05.242838 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-07 00:58:05.242842 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-07 00:58:05.242846 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-07 00:58:05.242850 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-07 00:58:05.242854 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-07 00:58:05.242860 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-07 00:58:05.242866 | orchestrator | 2026-01-07 00:58:05.242872 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.242878 | orchestrator 
| Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.644) 0:00:03.175 ***** 2026-01-07 00:58:05.242884 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.242890 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.242896 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.242902 | orchestrator | 2026-01-07 00:58:05.242908 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.242914 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.245) 0:00:03.421 ***** 2026-01-07 00:58:05.242920 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.242925 | orchestrator | 2026-01-07 00:58:05.242931 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.242938 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.108) 0:00:03.529 ***** 2026-01-07 00:58:05.242944 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.242950 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.242956 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.242961 | orchestrator | 2026-01-07 00:58:05.242968 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.242978 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.349) 0:00:03.879 ***** 2026-01-07 00:58:05.242983 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.242988 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.242995 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243000 | orchestrator | 2026-01-07 00:58:05.243006 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243013 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.261) 0:00:04.140 ***** 2026-01-07 00:58:05.243020 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243026 | 
orchestrator | 2026-01-07 00:58:05.243032 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243038 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.115) 0:00:04.256 ***** 2026-01-07 00:58:05.243043 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243049 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243056 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243061 | orchestrator | 2026-01-07 00:58:05.243069 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.243075 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.239) 0:00:04.495 ***** 2026-01-07 00:58:05.243080 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243086 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243091 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243097 | orchestrator | 2026-01-07 00:58:05.243103 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243110 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.232) 0:00:04.728 ***** 2026-01-07 00:58:05.243115 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243121 | orchestrator | 2026-01-07 00:58:05.243127 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243133 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.217) 0:00:04.946 ***** 2026-01-07 00:58:05.243144 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243150 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243156 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243162 | orchestrator | 2026-01-07 00:58:05.243168 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 
00:58:05.243174 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.205) 0:00:05.151 ***** 2026-01-07 00:58:05.243181 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243191 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243199 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243205 | orchestrator | 2026-01-07 00:58:05.243211 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243218 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.222) 0:00:05.374 ***** 2026-01-07 00:58:05.243224 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243230 | orchestrator | 2026-01-07 00:58:05.243236 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243242 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.105) 0:00:05.480 ***** 2026-01-07 00:58:05.243249 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243255 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243262 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243268 | orchestrator | 2026-01-07 00:58:05.243275 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.243281 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.203) 0:00:05.683 ***** 2026-01-07 00:58:05.243288 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243294 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243301 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243307 | orchestrator | 2026-01-07 00:58:05.243314 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243325 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.313) 0:00:05.997 ***** 2026-01-07 00:58:05.243331 | orchestrator | skipping: [testbed-node-0] 
2026-01-07 00:58:05.243354 | orchestrator | 2026-01-07 00:58:05.243360 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243367 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.106) 0:00:06.104 ***** 2026-01-07 00:58:05.243372 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243378 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243385 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243391 | orchestrator | 2026-01-07 00:58:05.243397 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.243404 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:00.219) 0:00:06.324 ***** 2026-01-07 00:58:05.243409 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243416 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243423 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243429 | orchestrator | 2026-01-07 00:58:05.243436 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243442 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:00.252) 0:00:06.576 ***** 2026-01-07 00:58:05.243448 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243454 | orchestrator | 2026-01-07 00:58:05.243460 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243467 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:00.107) 0:00:06.684 ***** 2026-01-07 00:58:05.243473 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243479 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243486 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243493 | orchestrator | 2026-01-07 00:58:05.243499 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-01-07 00:58:05.243505 | orchestrator | Wednesday 07 January 2026 00:56:30 +0000 (0:00:00.231) 0:00:06.915 ***** 2026-01-07 00:58:05.243511 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243517 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243523 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243529 | orchestrator | 2026-01-07 00:58:05.243535 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243540 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:00.382) 0:00:07.297 ***** 2026-01-07 00:58:05.243547 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243553 | orchestrator | 2026-01-07 00:58:05.243559 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243586 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:00.098) 0:00:07.396 ***** 2026-01-07 00:58:05.243593 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243599 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243606 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243612 | orchestrator | 2026-01-07 00:58:05.243620 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.243626 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:00.242) 0:00:07.638 ***** 2026-01-07 00:58:05.243632 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243638 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243645 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243650 | orchestrator | 2026-01-07 00:58:05.243657 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243662 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:00.264) 0:00:07.903 ***** 2026-01-07 
00:58:05.243669 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243674 | orchestrator | 2026-01-07 00:58:05.243680 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243686 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:00.112) 0:00:08.015 ***** 2026-01-07 00:58:05.243692 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243704 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243711 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243717 | orchestrator | 2026-01-07 00:58:05.243723 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 00:58:05.243729 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:00.244) 0:00:08.260 ***** 2026-01-07 00:58:05.243735 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243740 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243746 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243752 | orchestrator | 2026-01-07 00:58:05.243765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243771 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:00.412) 0:00:08.672 ***** 2026-01-07 00:58:05.243778 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243784 | orchestrator | 2026-01-07 00:58:05.243789 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243800 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:00.098) 0:00:08.771 ***** 2026-01-07 00:58:05.243806 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243812 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243818 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243824 | orchestrator | 2026-01-07 00:58:05.243830 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-01-07 00:58:05.243837 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:00.257) 0:00:09.029 ***** 2026-01-07 00:58:05.243842 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:05.243849 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:05.243856 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:05.243861 | orchestrator | 2026-01-07 00:58:05.243866 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 00:58:05.243873 | orchestrator | Wednesday 07 January 2026 00:56:33 +0000 (0:00:00.304) 0:00:09.333 ***** 2026-01-07 00:58:05.243879 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243884 | orchestrator | 2026-01-07 00:58:05.243890 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 00:58:05.243895 | orchestrator | Wednesday 07 January 2026 00:56:33 +0000 (0:00:00.115) 0:00:09.449 ***** 2026-01-07 00:58:05.243901 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.243908 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.243914 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.243919 | orchestrator | 2026-01-07 00:58:05.243925 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-07 00:58:05.243930 | orchestrator | Wednesday 07 January 2026 00:56:33 +0000 (0:00:00.466) 0:00:09.916 ***** 2026-01-07 00:58:05.243936 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:05.243942 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:05.243949 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:05.243955 | orchestrator | 2026-01-07 00:58:05.243961 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-07 00:58:05.243967 | orchestrator | Wednesday 07 January 2026 00:56:35 +0000 (0:00:01.772) 
0:00:11.689 ***** 2026-01-07 00:58:05.243972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 00:58:05.243978 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 00:58:05.243984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 00:58:05.243990 | orchestrator | 2026-01-07 00:58:05.243995 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-07 00:58:05.244001 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:01.642) 0:00:13.331 ***** 2026-01-07 00:58:05.244007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 00:58:05.244014 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 00:58:05.244025 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 00:58:05.244032 | orchestrator | 2026-01-07 00:58:05.244037 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-07 00:58:05.244043 | orchestrator | Wednesday 07 January 2026 00:56:39 +0000 (0:00:02.341) 0:00:15.673 ***** 2026-01-07 00:58:05.244050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 00:58:05.244056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 00:58:05.244062 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 00:58:05.244068 | orchestrator | 2026-01-07 00:58:05.244075 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-07 
00:58:05.244081 | orchestrator | Wednesday 07 January 2026 00:56:41 +0000 (0:00:02.128) 0:00:17.801 ***** 2026-01-07 00:58:05.244087 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244092 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244099 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244105 | orchestrator | 2026-01-07 00:58:05.244111 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-07 00:58:05.244118 | orchestrator | Wednesday 07 January 2026 00:56:41 +0000 (0:00:00.296) 0:00:18.098 ***** 2026-01-07 00:58:05.244125 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244131 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244138 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244145 | orchestrator | 2026-01-07 00:58:05.244151 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 00:58:05.244157 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:00.309) 0:00:18.408 ***** 2026-01-07 00:58:05.244165 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:05.244172 | orchestrator | 2026-01-07 00:58:05.244178 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-07 00:58:05.244185 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:00.734) 0:00:19.142 ***** 2026-01-07 00:58:05.244211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244263 | orchestrator | 2026-01-07 00:58:05.244269 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-07 00:58:05.244275 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:01.585) 0:00:20.728 ***** 2026-01-07 00:58:05.244293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244301 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244320 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244344 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244351 | orchestrator | 2026-01-07 00:58:05.244356 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-07 00:58:05.244369 | orchestrator | Wednesday 07 January 2026 00:56:45 +0000 (0:00:00.795) 0:00:21.524 ***** 2026-01-07 00:58:05.244376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244382 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244412 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244427 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244433 | orchestrator | 2026-01-07 00:58:05.244440 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-01-07 00:58:05.244446 | orchestrator | Wednesday 07 January 2026 00:56:46 +0000 (0:00:00.870) 0:00:22.394 ***** 2026-01-07 00:58:05.244461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 00:58:05.244502 | orchestrator | 2026-01-07 00:58:05.244509 | orchestrator | TASK 
[service-check-containers : horizon | Notify handlers to restart containers] ***
2026-01-07 00:58:05.244516 | orchestrator | Wednesday 07 January 2026 00:56:47 +0000 (0:00:01.478) 0:00:23.873 *****
2026-01-07 00:58:05.244522 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 00:58:05.244529 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:58:05.244536 | orchestrator | }
2026-01-07 00:58:05.244542 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 00:58:05.244549 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:58:05.244555 | orchestrator | }
2026-01-07 00:58:05.244612 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 00:58:05.244623 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 00:58:05.244629 | orchestrator | }
2026-01-07 00:58:05.244635 | orchestrator |
2026-01-07 00:58:05.244642 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 00:58:05.244648 | orchestrator | Wednesday 07 January 2026 00:56:47 +0000 (0:00:00.306) 0:00:24.179 *****
2026-01-07 00:58:05.244665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244678 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244693 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 00:58:05.244724 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244731 | orchestrator | 2026-01-07 00:58:05.244737 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 00:58:05.244743 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:01.016) 0:00:25.196 ***** 2026-01-07 00:58:05.244749 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:05.244755 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:05.244761 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:05.244767 | orchestrator | 2026-01-07 00:58:05.244773 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 00:58:05.244778 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:00.515) 0:00:25.712 ***** 2026-01-07 00:58:05.244785 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:05.244791 | orchestrator | 2026-01-07 00:58:05.244798 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-07 00:58:05.244804 | orchestrator | Wednesday 07 January 2026 00:56:50 +0000 (0:00:00.604) 0:00:26.316 ***** 2026-01-07 00:58:05.244809 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:05.244816 | orchestrator | 2026-01-07 00:58:05.244822 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-07 00:58:05.244828 | orchestrator | Wednesday 07 January 2026 00:56:52 +0000 (0:00:02.494) 0:00:28.810 ***** 2026-01-07 00:58:05.244834 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:05.244840 | orchestrator | 2026-01-07 00:58:05.244846 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-07 00:58:05.244852 | orchestrator | Wednesday 07 January 2026 00:56:55 +0000 (0:00:02.610) 0:00:31.421 ***** 2026-01-07 00:58:05.244859 
| orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:05.244865 | orchestrator | 2026-01-07 00:58:05.244871 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 00:58:05.244878 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:16.361) 0:00:47.782 ***** 2026-01-07 00:58:05.244883 | orchestrator | 2026-01-07 00:58:05.244890 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 00:58:05.244896 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.063) 0:00:47.846 ***** 2026-01-07 00:58:05.244903 | orchestrator | 2026-01-07 00:58:05.244909 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-07 00:58:05.244915 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.213) 0:00:48.060 ***** 2026-01-07 00:58:05.244926 | orchestrator | 2026-01-07 00:58:05.244932 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-07 00:58:05.244939 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.066) 0:00:48.127 ***** 2026-01-07 00:58:05.244945 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:05.244951 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:05.244957 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:05.244963 | orchestrator | 2026-01-07 00:58:05.244969 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:58:05.244977 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-01-07 00:58:05.244985 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-07 00:58:05.244996 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-07 00:58:05.245001 | 
orchestrator | 2026-01-07 00:58:05.245008 | orchestrator | 2026-01-07 00:58:05.245015 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:58:05.245020 | orchestrator | Wednesday 07 January 2026 00:58:03 +0000 (0:00:51.535) 0:01:39.662 ***** 2026-01-07 00:58:05.245026 | orchestrator | =============================================================================== 2026-01-07 00:58:05.245036 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.54s 2026-01-07 00:58:05.245043 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.36s 2026-01-07 00:58:05.245050 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.61s 2026-01-07 00:58:05.245056 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.49s 2026-01-07 00:58:05.245062 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.34s 2026-01-07 00:58:05.245069 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.13s 2026-01-07 00:58:05.245075 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.77s 2026-01-07 00:58:05.245081 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.64s 2026-01-07 00:58:05.245088 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.59s 2026-01-07 00:58:05.245095 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.48s 2026-01-07 00:58:05.245100 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.02s 2026-01-07 00:58:05.245107 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.93s 2026-01-07 00:58:05.245113 | orchestrator | service-cert-copy : horizon | Copying over 
backend internal TLS key ----- 0.87s 2026-01-07 00:58:05.245120 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.80s 2026-01-07 00:58:05.245126 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-01-07 00:58:05.245132 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-01-07 00:58:05.245138 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-01-07 00:58:05.245144 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-01-07 00:58:05.245150 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2026-01-07 00:58:05.245156 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.46s 2026-01-07 00:58:05.245162 | orchestrator | 2026-01-07 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:08.285442 | orchestrator | 2026-01-07 00:58:08 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:08.287785 | orchestrator | 2026-01-07 00:58:08 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:08.287914 | orchestrator | 2026-01-07 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:11.336287 | orchestrator | 2026-01-07 00:58:11 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:11.337208 | orchestrator | 2026-01-07 00:58:11 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:11.337267 | orchestrator | 2026-01-07 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:14.381327 | orchestrator | 2026-01-07 00:58:14 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:14.383592 | orchestrator | 2026-01-07 00:58:14 | INFO  
| Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:14.383641 | orchestrator | 2026-01-07 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:17.429850 | orchestrator | 2026-01-07 00:58:17 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:17.432726 | orchestrator | 2026-01-07 00:58:17 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:17.432836 | orchestrator | 2026-01-07 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:20.479467 | orchestrator | 2026-01-07 00:58:20 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:20.481257 | orchestrator | 2026-01-07 00:58:20 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:20.481318 | orchestrator | 2026-01-07 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:23.529037 | orchestrator | 2026-01-07 00:58:23 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:23.530821 | orchestrator | 2026-01-07 00:58:23 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:23.530859 | orchestrator | 2026-01-07 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:26.569154 | orchestrator | 2026-01-07 00:58:26 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:26.571986 | orchestrator | 2026-01-07 00:58:26 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:26.572042 | orchestrator | 2026-01-07 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:29.615659 | orchestrator | 2026-01-07 00:58:29 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:29.618141 | orchestrator | 2026-01-07 00:58:29 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 
00:58:29.618176 | orchestrator | 2026-01-07 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:32.667234 | orchestrator | 2026-01-07 00:58:32 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:32.668104 | orchestrator | 2026-01-07 00:58:32 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:32.668417 | orchestrator | 2026-01-07 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:35.715216 | orchestrator | 2026-01-07 00:58:35 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:35.717941 | orchestrator | 2026-01-07 00:58:35 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state STARTED 2026-01-07 00:58:35.717995 | orchestrator | 2026-01-07 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:38.764114 | orchestrator | 2026-01-07 00:58:38 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state STARTED 2026-01-07 00:58:38.769724 | orchestrator | 2026-01-07 00:58:38 | INFO  | Task b44529c5-bcb8-411d-addb-983f372c395b is in state SUCCESS 2026-01-07 00:58:38.774300 | orchestrator | 2026-01-07 00:58:38.774391 | orchestrator | 2026-01-07 00:58:38.774400 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:58:38.774408 | orchestrator | 2026-01-07 00:58:38.774415 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:58:38.774422 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.224) 0:00:00.224 ***** 2026-01-07 00:58:38.774427 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:38.774433 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:38.774437 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:38.774441 | orchestrator | 2026-01-07 00:58:38.774445 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-01-07 00:58:38.774449 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.237) 0:00:00.462 ***** 2026-01-07 00:58:38.774453 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-07 00:58:38.774458 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-07 00:58:38.774462 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-07 00:58:38.774466 | orchestrator | 2026-01-07 00:58:38.774469 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-07 00:58:38.774473 | orchestrator | 2026-01-07 00:58:38.774477 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 00:58:38.774481 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.351) 0:00:00.813 ***** 2026-01-07 00:58:38.774499 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:38.774505 | orchestrator | 2026-01-07 00:58:38.774511 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-07 00:58:38.774517 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.457) 0:00:01.271 ***** 2026-01-07 00:58:38.774528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774726 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774780 | orchestrator | 2026-01-07 00:58:38.774784 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-07 00:58:38.774792 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:01.667) 0:00:02.938 ***** 2026-01-07 00:58:38.774796 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:38.774801 | orchestrator | 2026-01-07 00:58:38.774805 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-07 00:58:38.774809 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.122) 0:00:03.060 ***** 2026-01-07 00:58:38.774813 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:38.774816 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:38.774820 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:38.774824 | orchestrator | 2026-01-07 00:58:38.774828 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-07 00:58:38.774832 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:00.325) 0:00:03.386 ***** 2026-01-07 00:58:38.774837 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:58:38.774841 | orchestrator | 2026-01-07 00:58:38.774845 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 00:58:38.774851 | orchestrator | Wednesday 07 January 2026 00:56:28 
+0000 (0:00:00.636) 0:00:04.023 ***** 2026-01-07 00:58:38.774855 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:38.774860 | orchestrator | 2026-01-07 00:58:38.774864 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-07 00:58:38.774868 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:00.474) 0:00:04.497 ***** 2026-01-07 00:58:38.774873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.774903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.774926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.775226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.775243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.775248 | orchestrator | 2026-01-07 00:58:38.775252 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-07 00:58:38.775258 | orchestrator | Wednesday 07 January 2026 00:56:31 +0000 (0:00:03.321) 0:00:07.819 ***** 2026-01-07 00:58:38.775276 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775307 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775344 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.775350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775375 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.775381 | orchestrator |
2026-01-07 00:58:38.775387 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-01-07 00:58:38.775398 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:00.504) 0:00:08.323 *****
2026-01-07 00:58:38.775404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775428 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775457 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.775465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775481 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.775509 | orchestrator |
2026-01-07 00:58:38.775514 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-01-07 00:58:38.775518 | orchestrator | Wednesday 07 January 2026 00:56:33 +0000 (0:00:00.729) 0:00:09.053 *****
2026-01-07 00:58:38.775526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775616 | orchestrator |
2026-01-07 00:58:38.775621 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-01-07 00:58:38.775627 | orchestrator | Wednesday 07 January 2026 00:56:36 +0000 (0:00:03.341) 0:00:12.394 *****
2026-01-07 00:58:38.775633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775711 | orchestrator |
2026-01-07 00:58:38.775714 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-01-07 00:58:38.775718 | orchestrator | Wednesday 07 January 2026 00:56:41 +0000 (0:00:05.456) 0:00:17.851 *****
2026-01-07 00:58:38.775722 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:38.775726 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:38.775730 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:38.775734 | orchestrator |
2026-01-07 00:58:38.775738 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-01-07 00:58:38.775743 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:01.340) 0:00:19.191 *****
2026-01-07 00:58:38.775749 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775755 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.775763 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.775770 | orchestrator |
2026-01-07 00:58:38.775778 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-07 00:58:38.775787 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:00.321) 0:00:19.728 *****
2026-01-07 00:58:38.775801 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775807 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.775813 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.775819 | orchestrator |
2026-01-07 00:58:38.775824 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-07 00:58:38.775831 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:00.469) 0:00:20.050 *****
2026-01-07 00:58:38.775837 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775843 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.775849 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.775855 | orchestrator |
2026-01-07 00:58:38.775861 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-07 00:58:38.775866 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:00.469) 0:00:20.520 *****
2026-01-07 00:58:38.775872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775897 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.775903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.775987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.775994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.775998 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.776003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-07 00:58:38.776008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:58:38.776017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:58:38.776028 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.776034 | orchestrator |
2026-01-07 00:58:38.776040 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 00:58:38.776046 | orchestrator | Wednesday 07 January 2026 00:56:45 +0000 (0:00:00.307) 0:00:21.159 *****
2026-01-07 00:58:38.776053 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.776059 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.776065 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.776072 | orchestrator |
2026-01-07 00:58:38.776079 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-01-07 00:58:38.776085 | orchestrator | Wednesday 07 January 2026 00:56:45 +0000 (0:00:00.307) 0:00:21.467 *****
2026-01-07 00:58:38.776092 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 00:58:38.776104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 00:58:38.776111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 00:58:38.776117 | orchestrator |
2026-01-07 00:58:38.776124 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-07 00:58:38.776130 | orchestrator | Wednesday 07 January 2026 00:56:47 +0000 (0:00:01.514) 0:00:22.982 *****
2026-01-07 00:58:38.776136 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 00:58:38.776142 | orchestrator |
2026-01-07 00:58:38.776148 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-07 00:58:38.776155 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:00.895) 0:00:23.877 *****
2026-01-07 00:58:38.776160 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.776164 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.776169 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.776174 | orchestrator |
2026-01-07 00:58:38.776178 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-07 00:58:38.776182 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:00.840) 0:00:24.717 ***** 2026-01-07 00:58:38.776187 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:58:38.776191 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 00:58:38.776196 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 00:58:38.776204 | orchestrator | 2026-01-07 00:58:38.776212 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-07 00:58:38.776219 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:01.084) 0:00:25.802 ***** 2026-01-07 00:58:38.776225 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:38.776232 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:38.776237 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:38.776242 | orchestrator | 2026-01-07 00:58:38.776248 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-07 00:58:38.776253 | orchestrator | Wednesday 07 January 2026 00:56:50 +0000 (0:00:00.342) 0:00:26.144 ***** 2026-01-07 00:58:38.776259 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 00:58:38.776265 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 00:58:38.776271 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-07 00:58:38.776276 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 00:58:38.776281 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 00:58:38.776287 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-07 00:58:38.776293 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 00:58:38.776299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 00:58:38.776315 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-07 00:58:38.776321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 00:58:38.776327 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 00:58:38.776332 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-07 00:58:38.776339 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 00:58:38.776344 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 00:58:38.776352 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-07 00:58:38.776356 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 00:58:38.776360 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 00:58:38.776364 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 00:58:38.776367 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 00:58:38.776371 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 00:58:38.776375 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 00:58:38.776379 | orchestrator | 2026-01-07 00:58:38.776382 
| orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-07 00:58:38.776386 | orchestrator | Wednesday 07 January 2026 00:56:59 +0000 (0:00:09.167) 0:00:35.311 ***** 2026-01-07 00:58:38.776390 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 00:58:38.776394 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 00:58:38.776398 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 00:58:38.776402 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 00:58:38.776411 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 00:58:38.776415 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 00:58:38.776419 | orchestrator | 2026-01-07 00:58:38.776422 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-01-07 00:58:38.776426 | orchestrator | Wednesday 07 January 2026 00:57:02 +0000 (0:00:03.097) 0:00:38.408 ***** 2026-01-07 00:58:38.776431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.776442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.776455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-07 00:58:38.776469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 00:58:38.776602 | orchestrator | 2026-01-07 00:58:38.776609 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-01-07 00:58:38.776614 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:02.442) 0:00:40.850 ***** 2026-01-07 00:58:38.776621 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 00:58:38.776627 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:58:38.776633 | orchestrator | } 2026-01-07 00:58:38.776638 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 00:58:38.776644 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:58:38.776650 | orchestrator | } 2026-01-07 00:58:38.776656 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 00:58:38.776662 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 00:58:38.776667 | orchestrator | } 2026-01-07 00:58:38.776673 | orchestrator | 2026-01-07 00:58:38.776678 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 00:58:38.776684 | orchestrator | Wednesday 07 January 2026 00:57:05 +0000 (0:00:00.344) 0:00:41.195 ***** 2026-01-07 00:58:38.776700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-07 00:58:38.776707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:58:38.776719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-01-07 00:58:38.776725 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:38.776735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-07 00:58:38.776742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:58:38.776752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:58:38.776758 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:38.776764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-07 00:58:38.776777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 00:58:38.776783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 00:58:38.776790 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:38.776795 | orchestrator | 2026-01-07 00:58:38.776801 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 00:58:38.776805 | orchestrator | Wednesday 07 January 2026 00:57:06 +0000 (0:00:00.899) 0:00:42.094 ***** 2026-01-07 00:58:38.776809 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:38.776813 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:38.776817 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:38.776820 | orchestrator | 2026-01-07 00:58:38.776824 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-07 00:58:38.776831 | orchestrator | Wednesday 07 January 2026 00:57:06 +0000 (0:00:00.298) 0:00:42.392 ***** 2026-01-07 00:58:38.776835 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:38.776839 | orchestrator | 2026-01-07 00:58:38.776843 | orchestrator | TASK [keystone : 
Creating Keystone database user and setting permissions] ****** 2026-01-07 00:58:38.776847 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:02.054) 0:00:44.447 ***** 2026-01-07 00:58:38.776851 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:38.776854 | orchestrator | 2026-01-07 00:58:38.776859 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-07 00:58:38.776862 | orchestrator | Wednesday 07 January 2026 00:57:10 +0000 (0:00:02.080) 0:00:46.527 ***** 2026-01-07 00:58:38.776866 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:38.776870 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:38.776874 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:38.776878 | orchestrator | 2026-01-07 00:58:38.776882 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-07 00:58:38.776886 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.803) 0:00:47.331 ***** 2026-01-07 00:58:38.776889 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:38.776893 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:38.776897 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:38.776901 | orchestrator | 2026-01-07 00:58:38.776911 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-07 00:58:38.776915 | orchestrator | Wednesday 07 January 2026 00:57:11 +0000 (0:00:00.312) 0:00:47.643 ***** 2026-01-07 00:58:38.776919 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:38.776922 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:38.776926 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:38.776930 | orchestrator | 2026-01-07 00:58:38.776934 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-07 00:58:38.776941 | orchestrator | Wednesday 07 January 2026 00:57:12 +0000 (0:00:00.611) 0:00:48.254 
***** 2026-01-07 00:58:38.776945 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:38.776949 | orchestrator | 2026-01-07 00:58:38.776953 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-07 00:58:38.776957 | orchestrator | Wednesday 07 January 2026 00:57:27 +0000 (0:00:14.932) 0:01:03.187 ***** 2026-01-07 00:58:38.776961 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:38.776965 | orchestrator | 2026-01-07 00:58:38.776968 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 00:58:38.776972 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:11.881) 0:01:15.068 ***** 2026-01-07 00:58:38.776976 | orchestrator | 2026-01-07 00:58:38.776980 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 00:58:38.776984 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.063) 0:01:15.132 ***** 2026-01-07 00:58:38.776988 | orchestrator | 2026-01-07 00:58:38.776991 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-07 00:58:38.776995 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.067) 0:01:15.199 ***** 2026-01-07 00:58:38.776999 | orchestrator | 2026-01-07 00:58:38.777003 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-07 00:58:38.777007 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.067) 0:01:15.267 ***** 2026-01-07 00:58:38.777010 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:38.777014 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:38.777018 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:38.777022 | orchestrator | 2026-01-07 00:58:38.777028 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-07 00:58:38.777034 | orchestrator | 
Wednesday 07 January 2026 00:57:51 +0000 (0:00:12.303) 0:01:27.571 *****
2026-01-07 00:58:38.777040 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:38.777045 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:38.777051 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:38.777057 | orchestrator |
2026-01-07 00:58:38.777063 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-07 00:58:38.777069 | orchestrator | Wednesday 07 January 2026 00:58:01 +0000 (0:00:09.879) 0:01:37.450 *****
2026-01-07 00:58:38.777075 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:38.777081 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:38.777088 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:38.777094 | orchestrator |
2026-01-07 00:58:38.777100 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 00:58:38.777106 | orchestrator | Wednesday 07 January 2026 00:58:08 +0000 (0:00:06.736) 0:01:44.186 *****
2026-01-07 00:58:38.777112 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:38.777119 | orchestrator |
2026-01-07 00:58:38.777124 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-07 00:58:38.777129 | orchestrator | Wednesday 07 January 2026 00:58:08 +0000 (0:00:00.559) 0:01:44.746 *****
2026-01-07 00:58:38.777135 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:38.777140 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:38.777145 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:38.777151 | orchestrator |
2026-01-07 00:58:38.777156 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-07 00:58:38.777167 | orchestrator | Wednesday 07 January 2026 00:58:10 +0000 (0:00:01.126) 0:01:45.872 *****
2026-01-07 00:58:38.777173 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:38.777179 | orchestrator |
2026-01-07 00:58:38.777184 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-07 00:58:38.777190 | orchestrator | Wednesday 07 January 2026 00:58:11 +0000 (0:00:01.640) 0:01:47.513 *****
2026-01-07 00:58:38.777196 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-07 00:58:38.777201 | orchestrator |
2026-01-07 00:58:38.777207 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] *************
2026-01-07 00:58:38.777214 | orchestrator | Wednesday 07 January 2026 00:58:23 +0000 (0:00:12.152) 0:01:59.666 *****
2026-01-07 00:58:38.777220 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-07 00:58:38.777226 | orchestrator |
2026-01-07 00:58:38.777233 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************
2026-01-07 00:58:38.777244 | orchestrator | Wednesday 07 January 2026 00:58:27 +0000 (0:00:03.853) 0:02:03.519 *****
2026-01-07 00:58:38.777250 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-07 00:58:38.777256 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-07 00:58:38.777262 | orchestrator |
2026-01-07 00:58:38.777270 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-07 00:58:38.777274 | orchestrator | Wednesday 07 January 2026 00:58:33 +0000 (0:00:06.272) 0:02:09.792 *****
2026-01-07 00:58:38.777278 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.777282 | orchestrator |
2026-01-07 00:58:38.777288 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-01-07 00:58:38.777294 | orchestrator | Wednesday 07 January 2026 00:58:34 +0000 (0:00:00.118) 0:02:09.911 *****
2026-01-07 00:58:38.777299 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.777306 | orchestrator |
2026-01-07 00:58:38.777312 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-01-07 00:58:38.777318 | orchestrator | Wednesday 07 January 2026 00:58:34 +0000 (0:00:00.124) 0:02:10.036 *****
2026-01-07 00:58:38.777324 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.777330 | orchestrator |
2026-01-07 00:58:38.777336 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] ***********
2026-01-07 00:58:38.777342 | orchestrator | Wednesday 07 January 2026 00:58:34 +0000 (0:00:00.127) 0:02:10.163 *****
2026-01-07 00:58:38.777348 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.777353 | orchestrator |
2026-01-07 00:58:38.777356 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-01-07 00:58:38.777367 | orchestrator | Wednesday 07 January 2026 00:58:34 +0000 (0:00:00.292) 0:02:10.456 *****
2026-01-07 00:58:38.777371 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:38.777374 | orchestrator |
2026-01-07 00:58:38.777378 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 00:58:38.777382 | orchestrator | Wednesday 07 January 2026 00:58:37 +0000 (0:00:03.059) 0:02:13.515 *****
2026-01-07 00:58:38.777386 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:38.777390 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:38.777394 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:38.777397 | orchestrator |
2026-01-07 00:58:38.777401 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:58:38.777407 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-01-07 00:58:38.777413 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-07 00:58:38.777417 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-07 00:58:38.777425 | orchestrator |
2026-01-07 00:58:38.777430 | orchestrator |
2026-01-07 00:58:38.777436 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:58:38.777441 | orchestrator | Wednesday 07 January 2026 00:58:38 +0000 (0:00:00.687) 0:02:14.203 *****
2026-01-07 00:58:38.777449 | orchestrator | ===============================================================================
2026-01-07 00:58:38.777459 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.93s
2026-01-07 00:58:38.777465 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 12.30s
2026-01-07 00:58:38.777470 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.15s
2026-01-07 00:58:38.777476 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.88s
2026-01-07 00:58:38.777482 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.88s
2026-01-07 00:58:38.777538 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.17s
2026-01-07 00:58:38.777545 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.74s
2026-01-07 00:58:38.777551 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 6.27s
2026-01-07 00:58:38.777556 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.46s
2026-01-07 00:58:38.777562 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.85s
2026-01-07 00:58:38.777568 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s
2026-01-07 00:58:38.777574 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.32s
2026-01-07 00:58:38.777580 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.10s
2026-01-07 00:58:38.777586 | orchestrator | keystone : Creating default user role ----------------------------------- 3.06s
2026-01-07 00:58:38.777592 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.44s
2026-01-07 00:58:38.777598 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.08s
2026-01-07 00:58:38.777604 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.05s
2026-01-07 00:58:38.777610 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.67s
2026-01-07 00:58:38.777616 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.64s
2026-01-07 00:58:38.777622 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.51s
2026-01-07 00:58:38.777628 | orchestrator | 2026-01-07 00:58:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:41.813120 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task e969c90d-09e9-4ca5-aebd-03ab2f9dd754 is in state SUCCESS
2026-01-07 00:58:41.813243 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task d72c4574-0af4-4557-ad31-f7fcfdaaa8b4 is in state STARTED
2026-01-07 00:58:41.814073 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 00:58:41.814778 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 00:58:41.815444 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task 76320a3a-ba03-4d2e-a113-b253bb006ada is in state STARTED
2026-01-07 00:58:41.815537 |
orchestrator | 2026-01-07 00:58:41 | INFO  | Wait 1
second(s) until the next check
2026-01-07 01:00:16.164643 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task d72c4574-0af4-4557-ad31-f7fcfdaaa8b4 is in state STARTED
2026-01-07 01:00:16.164943 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:00:16.165597 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:00:16.166096 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task 770e550b-230f-4af2-8c5e-8a50935f8ce7 is in state SUCCESS
2026-01-07 01:00:16.166665 | orchestrator |
2026-01-07 01:00:16.166686 | orchestrator |
2026-01-07 01:00:16.166693 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-07 01:00:16.166699 | orchestrator |
2026-01-07 01:00:16.166705 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-07 01:00:16.166710 | orchestrator | Wednesday 07 January 2026 00:57:49 +0000 (0:00:00.170) 0:00:00.170 *****
2026-01-07 01:00:16.166715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-07 01:00:16.166721 | orchestrator |
2026-01-07 01:00:16.166726 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-07 01:00:16.166731 | orchestrator | Wednesday 07 January 2026 00:57:49 +0000 (0:00:00.174) 0:00:00.345 *****
2026-01-07 01:00:16.166736 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-07 01:00:16.166741 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-07 01:00:16.166747 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-07 01:00:16.166752 | orchestrator |
2026-01-07 01:00:16.166796 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-07 01:00:16.166804 | orchestrator | Wednesday 07 January 2026 00:57:50 +0000 (0:00:01.198) 0:00:01.543 *****
2026-01-07 01:00:16.166809 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-07 01:00:16.166815 | orchestrator |
2026-01-07 01:00:16.166820 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-07 01:00:16.166826 | orchestrator | Wednesday 07 January 2026 00:57:52 +0000 (0:00:01.298) 0:00:02.841 *****
2026-01-07 01:00:16.166831 | orchestrator | changed: [testbed-manager]
2026-01-07 01:00:16.166836 | orchestrator |
2026-01-07 01:00:16.166842 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-07 01:00:16.166847 | orchestrator | Wednesday 07 January 2026 00:57:52 +0000 (0:00:00.662) 0:00:03.504 *****
2026-01-07 01:00:16.166852 | orchestrator | changed: [testbed-manager]
2026-01-07 01:00:16.166891 | orchestrator |
2026-01-07 01:00:16.166898 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-07 01:00:16.166903 | orchestrator | Wednesday 07 January 2026 00:57:53 +0000 (0:00:00.761) 0:00:04.266 *****
2026-01-07 01:00:16.166908 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-07 01:00:16.166928 | orchestrator | ok: [testbed-manager]
2026-01-07 01:00:16.166933 | orchestrator |
2026-01-07 01:00:16.166938 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-07 01:00:16.166943 | orchestrator | Wednesday 07 January 2026 00:58:31 +0000 (0:00:38.417) 0:00:42.683 *****
2026-01-07 01:00:16.166948 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-07 01:00:16.166954 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-07 01:00:16.166959 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-07 01:00:16.166964 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-07 01:00:16.166969 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-07 01:00:16.166975 | orchestrator |
2026-01-07 01:00:16.166982 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-07 01:00:16.166991 | orchestrator | Wednesday 07 January 2026 00:58:35 +0000 (0:00:03.995) 0:00:46.679 *****
2026-01-07 01:00:16.166999 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-07 01:00:16.167006 | orchestrator |
2026-01-07 01:00:16.167014 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-07 01:00:16.167023 | orchestrator | Wednesday 07 January 2026 00:58:36 +0000 (0:00:00.129) 0:00:47.151 *****
2026-01-07 01:00:16.167041 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:00:16.167051 | orchestrator |
2026-01-07 01:00:16.167059 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-07 01:00:16.167066 | orchestrator | Wednesday 07 January 2026 00:58:36 +0000 (0:00:00.129) 0:00:47.280 *****
2026-01-07 01:00:16.167071 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:00:16.167075 | orchestrator |
2026-01-07 01:00:16.167080 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-07 01:00:16.167085 | orchestrator | Wednesday 07 January 2026 00:58:37 +0000 (0:00:00.510) 0:00:47.791 *****
2026-01-07 01:00:16.167090 | orchestrator | changed: [testbed-manager]
2026-01-07 01:00:16.167094 | orchestrator |
2026-01-07 01:00:16.167099 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-07 01:00:16.167104 | orchestrator | Wednesday 07 January 2026 00:58:38 +0000 (0:00:01.366) 0:00:49.157 *****
2026-01-07 01:00:16.167109 | orchestrator | changed: [testbed-manager]
2026-01-07 01:00:16.167114 | orchestrator |
2026-01-07 01:00:16.167119 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-07 01:00:16.167123 | orchestrator | Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.797) 0:00:49.955 *****
2026-01-07 01:00:16.167128 | orchestrator | changed: [testbed-manager]
2026-01-07 01:00:16.167133 | orchestrator |
2026-01-07 01:00:16.167138 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-07 01:00:16.167142 | orchestrator | Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.690) 0:00:50.646 *****
2026-01-07 01:00:16.167147 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-07 01:00:16.167152 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-07 01:00:16.167157 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-07 01:00:16.167162 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-07 01:00:16.167166 | orchestrator |
2026-01-07 01:00:16.167171 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:00:16.167176 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 01:00:16.167182 | orchestrator |
2026-01-07 01:00:16.167186 | orchestrator |
2026-01-07
01:00:16.167199 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:00:16.167204 | orchestrator | Wednesday 07 January 2026 00:58:41 +0000 (0:00:01.464) 0:00:52.111 ***** 2026-01-07 01:00:16.167209 | orchestrator | =============================================================================== 2026-01-07 01:00:16.167214 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.42s 2026-01-07 01:00:16.167223 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.00s 2026-01-07 01:00:16.167228 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.46s 2026-01-07 01:00:16.167233 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.37s 2026-01-07 01:00:16.167238 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s 2026-01-07 01:00:16.167242 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s 2026-01-07 01:00:16.167247 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-01-07 01:00:16.167252 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.76s 2026-01-07 01:00:16.167257 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.69s 2026-01-07 01:00:16.167262 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.66s 2026-01-07 01:00:16.167266 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.51s 2026-01-07 01:00:16.167319 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-01-07 01:00:16.167325 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.17s 2026-01-07 01:00:16.167329 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-01-07 01:00:16.167334 | orchestrator | 2026-01-07 01:00:16.167339 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 01:00:16.167344 | orchestrator | 2.16.14 2026-01-07 01:00:16.167349 | orchestrator | 2026-01-07 01:00:16.167354 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-01-07 01:00:16.167359 | orchestrator | 2026-01-07 01:00:16.167363 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-07 01:00:16.167368 | orchestrator | Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.206) 0:00:00.206 ***** 2026-01-07 01:00:16.167373 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167378 | orchestrator | 2026-01-07 01:00:16.167383 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-07 01:00:16.167387 | orchestrator | Wednesday 07 January 2026 00:58:46 +0000 (0:00:01.067) 0:00:01.273 ***** 2026-01-07 01:00:16.167392 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167397 | orchestrator | 2026-01-07 01:00:16.167402 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-07 01:00:16.167407 | orchestrator | Wednesday 07 January 2026 00:58:47 +0000 (0:00:00.908) 0:00:02.182 ***** 2026-01-07 01:00:16.167411 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167416 | orchestrator | 2026-01-07 01:00:16.167421 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-07 01:00:16.167426 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 (0:00:00.982) 0:00:03.081 ***** 2026-01-07 01:00:16.167430 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167435 | orchestrator | 2026-01-07 01:00:16.167440 | orchestrator | TASK
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-07 01:00:16.167445 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:00.982) 0:00:04.063 ***** 2026-01-07 01:00:16.167450 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167454 | orchestrator | 2026-01-07 01:00:16.167459 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-07 01:00:16.167468 | orchestrator | Wednesday 07 January 2026 00:58:50 +0000 (0:00:00.924) 0:00:04.987 ***** 2026-01-07 01:00:16.167473 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167478 | orchestrator | 2026-01-07 01:00:16.167482 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-07 01:00:16.167487 | orchestrator | Wednesday 07 January 2026 00:58:51 +0000 (0:00:00.994) 0:00:05.982 ***** 2026-01-07 01:00:16.167492 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167497 | orchestrator | 2026-01-07 01:00:16.167502 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-07 01:00:16.167510 | orchestrator | Wednesday 07 January 2026 00:58:52 +0000 (0:00:01.129) 0:00:07.112 ***** 2026-01-07 01:00:16.167515 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167520 | orchestrator | 2026-01-07 01:00:16.167525 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-07 01:00:16.167529 | orchestrator | Wednesday 07 January 2026 00:58:53 +0000 (0:00:01.038) 0:00:08.151 ***** 2026-01-07 01:00:16.167534 | orchestrator | changed: [testbed-manager] 2026-01-07 01:00:16.167540 | orchestrator | 2026-01-07 01:00:16.167545 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-07 01:00:16.167551 | orchestrator | Wednesday 07 January 2026 00:59:49 +0000 (0:00:55.756) 0:01:03.907 ***** 2026-01-07 
01:00:16.167557 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:00:16.167562 | orchestrator | 2026-01-07 01:00:16.167568 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:00:16.167573 | orchestrator | 2026-01-07 01:00:16.167579 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:00:16.167585 | orchestrator | Wednesday 07 January 2026 00:59:49 +0000 (0:00:00.137) 0:01:04.045 ***** 2026-01-07 01:00:16.167591 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:16.167596 | orchestrator | 2026-01-07 01:00:16.167602 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:00:16.167607 | orchestrator | 2026-01-07 01:00:16.167613 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:00:16.167618 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:11.524) 0:01:15.570 ***** 2026-01-07 01:00:16.167624 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:16.167629 | orchestrator | 2026-01-07 01:00:16.167638 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:00:16.167644 | orchestrator | 2026-01-07 01:00:16.167650 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:00:16.167656 | orchestrator | Wednesday 07 January 2026 01:00:12 +0000 (0:00:11.305) 0:01:26.875 ***** 2026-01-07 01:00:16.167662 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:16.167667 | orchestrator | 2026-01-07 01:00:16.167673 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:00:16.167679 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 01:00:16.167685 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:00:16.167691 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:00:16.167696 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:00:16.167702 | orchestrator | 2026-01-07 01:00:16.167708 | orchestrator | 2026-01-07 01:00:16.167713 | orchestrator | 2026-01-07 01:00:16.167719 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:00:16.167725 | orchestrator | Wednesday 07 January 2026 01:00:13 +0000 (0:00:00.922) 0:01:27.798 ***** 2026-01-07 01:00:16.167731 | orchestrator | =============================================================================== 2026-01-07 01:00:16.167737 | orchestrator | Create admin user ------------------------------------------------------ 55.76s 2026-01-07 01:00:16.167742 | orchestrator | Restart ceph manager service ------------------------------------------- 23.75s 2026-01-07 01:00:16.167749 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.13s 2026-01-07 01:00:16.167758 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.07s 2026-01-07 01:00:16.167767 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s 2026-01-07 01:00:16.167775 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.99s 2026-01-07 01:00:16.167787 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.98s 2026-01-07 01:00:16.167795 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s 2026-01-07 01:00:16.167803 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2026-01-07 01:00:16.167810 | orchestrator | Set 
mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s 2026-01-07 01:00:16.167818 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2026-01-07 01:00:16.167827 | orchestrator | 2026-01-07 01:00:16 | INFO  | Task 76320a3a-ba03-4d2e-a113-b253bb006ada is in state STARTED 2026-01-07 01:00:16.167844 | orchestrator | 2026-01-07 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:52.572425 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task d72c4574-0af4-4557-ad31-f7fcfdaaa8b4 is in state STARTED 2026-01-07 01:00:52.574109 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task
cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:00:52.575879 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:00:52.578539 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task 76320a3a-ba03-4d2e-a113-b253bb006ada is in state SUCCESS 2026-01-07 01:00:52.580416 | orchestrator | 2026-01-07 01:00:52.580467 | orchestrator | 2026-01-07 01:00:52.580476 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:00:52.580484 | orchestrator | 2026-01-07 01:00:52.580490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:00:52.580497 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.193) 0:00:00.193 ***** 2026-01-07 01:00:52.580550 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:00:52.580560 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:00:52.580567 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:00:52.580573 | orchestrator | 2026-01-07 01:00:52.580580 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:00:52.580587 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.273) 0:00:00.467 ***** 2026-01-07 01:00:52.580594 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-07 01:00:52.580601 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-07 01:00:52.580608 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-07 01:00:52.580614 | orchestrator | 2026-01-07 01:00:52.580620 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-07 01:00:52.580680 | orchestrator | 2026-01-07 01:00:52.580685 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:00:52.580689 | orchestrator | Wednesday 07 January 
2026 00:58:44 +0000 (0:00:00.485) 0:00:00.952 ***** 2026-01-07 01:00:52.580693 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:52.580698 | orchestrator | 2026-01-07 01:00:52.580703 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-01-07 01:00:52.580707 | orchestrator | Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.718) 0:00:01.670 ***** 2026-01-07 01:00:52.580711 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-01-07 01:00:52.580715 | orchestrator | 2026-01-07 01:00:52.580719 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************ 2026-01-07 01:00:52.580723 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:04.676) 0:00:06.346 ***** 2026-01-07 01:00:52.580727 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-07 01:00:52.580731 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-07 01:00:52.580735 | orchestrator | 2026-01-07 01:00:52.580931 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-07 01:00:52.580942 | orchestrator | Wednesday 07 January 2026 00:58:56 +0000 (0:00:06.969) 0:00:13.316 ***** 2026-01-07 01:00:52.580946 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-07 01:00:52.580965 | orchestrator | 2026-01-07 01:00:52.580969 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-07 01:00:52.580973 | orchestrator | Wednesday 07 January 2026 00:58:59 +0000 (0:00:03.239) 0:00:16.555 ***** 2026-01-07 01:00:52.580977 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:00:52.580981 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service) 2026-01-07 01:00:52.580985 | orchestrator | 2026-01-07 01:00:52.580989 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-07 01:00:52.580992 | orchestrator | Wednesday 07 January 2026 00:59:03 +0000 (0:00:03.777) 0:00:20.333 ***** 2026-01-07 01:00:52.580996 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:00:52.581000 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-07 01:00:52.581004 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-01-07 01:00:52.581008 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-07 01:00:52.581012 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-07 01:00:52.581015 | orchestrator | 2026-01-07 01:00:52.581020 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-01-07 01:00:52.581024 | orchestrator | Wednesday 07 January 2026 00:59:20 +0000 (0:00:16.409) 0:00:36.743 ***** 2026-01-07 01:00:52.581028 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-07 01:00:52.581031 | orchestrator | 2026-01-07 01:00:52.581035 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-07 01:00:52.581039 | orchestrator | Wednesday 07 January 2026 00:59:24 +0000 (0:00:04.148) 0:00:40.891 ***** 2026-01-07 01:00:52.581046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581139 | orchestrator | 2026-01-07 01:00:52.581143 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-07 01:00:52.581147 | orchestrator | Wednesday 07 January 2026 00:59:26 +0000 (0:00:02.127) 0:00:43.018 ***** 2026-01-07 01:00:52.581151 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-07 01:00:52.581155 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-07 01:00:52.581159 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-07 01:00:52.581163 | orchestrator | 2026-01-07 01:00:52.581166 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-07 01:00:52.581170 | orchestrator | Wednesday 07 January 2026 00:59:27 +0000 (0:00:01.376) 0:00:44.395 ***** 2026-01-07 01:00:52.581174 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.581178 | orchestrator | 2026-01-07 01:00:52.581182 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-07 01:00:52.581186 | orchestrator | Wednesday 07 January 2026 00:59:27 +0000 (0:00:00.170) 0:00:44.565 ***** 2026-01-07 01:00:52.581206 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
01:00:52.581211 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.581214 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.581218 | orchestrator | 2026-01-07 01:00:52.581222 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:00:52.581226 | orchestrator | Wednesday 07 January 2026 00:59:28 +0000 (0:00:00.402) 0:00:44.967 ***** 2026-01-07 01:00:52.581230 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:00:52.581234 | orchestrator | 2026-01-07 01:00:52.581238 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-07 01:00:52.581241 | orchestrator | Wednesday 07 January 2026 00:59:28 +0000 (0:00:00.453) 0:00:45.420 ***** 2026-01-07 01:00:52.581246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581255 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581313 | orchestrator | 2026-01-07 01:00:52.581317 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-07 01:00:52.581320 | orchestrator | Wednesday 07 January 2026 00:59:31 +0000 (0:00:02.794) 0:00:48.215 ***** 2026-01-07 01:00:52.581325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581351 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.581367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581393 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.581399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581427 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.581433 | orchestrator | 2026-01-07 01:00:52.581439 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-07 01:00:52.581446 | 
orchestrator | Wednesday 07 January 2026 00:59:32 +0000 (0:00:01.049) 0:00:49.264 ***** 2026-01-07 01:00:52.581456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581475 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.581485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581508 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.581517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581536 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.581540 | orchestrator | 2026-01-07 01:00:52.581544 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-07 01:00:52.581548 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:01.567) 0:00:50.831 ***** 2026-01-07 01:00:52.581557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
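The `healthcheck` dicts repeated in the items above (interval, retries, start_period, test, timeout) follow Docker's container healthcheck model. A minimal sketch of how such a dict could be translated into `docker run` health flags is shown below; the field names are taken from the log, but the flag mapping is an illustration, not kolla-ansible's actual implementation.

```python
def healthcheck_to_docker_args(hc):
    """Map a kolla-style healthcheck dict (as seen in the deploy log)
    to equivalent `docker run` CLI flags. Illustrative sketch only."""
    test = hc["test"]
    # The log items use the ['CMD-SHELL', '<command>'] form; keep the command part.
    assert test[0] == "CMD-SHELL"
    return [
        "--health-cmd", test[1],
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example values copied from the barbican_api item for testbed-node-0:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(hc))
```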
2026-01-07 01:00:52.581570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581613 | orchestrator | 2026-01-07 01:00:52.581618 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-07 01:00:52.581622 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:04.289) 0:00:55.121 ***** 2026-01-07 01:00:52.581627 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:00:52.581634 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:00:52.581639 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:00:52.581643 | orchestrator | 2026-01-07 01:00:52.581647 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-07 01:00:52.581652 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:02.632) 0:00:57.753 ***** 2026-01-07 01:00:52.581656 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:00:52.581661 | orchestrator | 2026-01-07 01:00:52.581666 
| orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-07 01:00:52.581670 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:01.000) 0:00:58.754 ***** 2026-01-07 01:00:52.581675 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.581679 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.581683 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.581688 | orchestrator | 2026-01-07 01:00:52.581693 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-07 01:00:52.581697 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.561) 0:00:59.315 ***** 2026-01-07 01:00:52.581705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581727 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581765 | orchestrator | 2026-01-07 01:00:52.581770 | 
orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-07 01:00:52.581774 | orchestrator | Wednesday 07 January 2026 00:59:52 +0000 (0:00:09.614) 0:01:08.930 ***** 2026-01-07 01:00:52.581779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 
01:00:52.581791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581796 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.581803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581820 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.581825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.581832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.581842 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.581846 | orchestrator | 2026-01-07 01:00:52.581851 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-01-07 01:00:52.581856 | orchestrator | Wednesday 07 January 2026 00:59:53 +0000 (0:00:00.820) 0:01:09.751 ***** 2026-01-07 01:00:52.581862 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:00:52.581884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:00:52.581919 | orchestrator | 2026-01-07 01:00:52.581924 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-01-07 01:00:52.581928 | orchestrator | Wednesday 07 January 2026 00:59:57 +0000 (0:00:04.373) 0:01:14.124 ***** 2026-01-07 01:00:52.581933 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:00:52.581937 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:00:52.581942 | orchestrator | } 2026-01-07 01:00:52.581947 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:00:52.581951 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:00:52.581956 | orchestrator | } 2026-01-07 01:00:52.581961 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:00:52.581967 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 
01:00:52.581974 | orchestrator | } 2026-01-07 01:00:52.581979 | orchestrator | 2026-01-07 01:00:52.581987 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:00:52.581997 | orchestrator | Wednesday 07 January 2026 00:59:57 +0000 (0:00:00.399) 0:01:14.523 ***** 2026-01-07 01:00:52.582007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.582061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582076 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:00:52.582088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.582096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582114 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:00:52.582124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:00:52.582133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:00:52.582146 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:00:52.582154 | orchestrator | 2026-01-07 01:00:52.582159 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:00:52.582163 | orchestrator | Wednesday 07 January 2026 
00:59:58 +0000 (0:00:00.971) 0:01:15.500 *****
2026-01-07 01:00:52.582168 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:00:52.582172 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:00:52.582177 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:00:52.582182 | orchestrator |
2026-01-07 01:00:52.582186 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-01-07 01:00:52.582219 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:00.971) 0:01:16.471 *****
2026-01-07 01:00:52.582226 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582233 | orchestrator |
2026-01-07 01:00:52.582239 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-01-07 01:00:52.582246 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:02.522) 0:01:18.994 *****
2026-01-07 01:00:52.582258 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582263 | orchestrator |
2026-01-07 01:00:52.582268 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-01-07 01:00:52.582272 | orchestrator | Wednesday 07 January 2026 01:00:05 +0000 (0:00:02.781) 0:01:21.776 *****
2026-01-07 01:00:52.582276 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582281 | orchestrator |
2026-01-07 01:00:52.582286 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-07 01:00:52.582290 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:10.489) 0:01:32.265 *****
2026-01-07 01:00:52.582294 | orchestrator |
2026-01-07 01:00:52.582299 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-07 01:00:52.582303 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:00.106) 0:01:32.371 *****
2026-01-07 01:00:52.582308 | orchestrator |
2026-01-07 01:00:52.582312 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-07 01:00:52.582317 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:00.060) 0:01:32.431 *****
2026-01-07 01:00:52.582321 | orchestrator |
2026-01-07 01:00:52.582325 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-01-07 01:00:52.582330 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:00.066) 0:01:32.497 *****
2026-01-07 01:00:52.582334 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582339 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:00:52.582343 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:00:52.582347 | orchestrator |
2026-01-07 01:00:52.582352 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-01-07 01:00:52.582356 | orchestrator | Wednesday 07 January 2026 01:00:26 +0000 (0:00:10.388) 0:01:42.886 *****
2026-01-07 01:00:52.582361 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:00:52.582365 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582372 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:00:52.582378 | orchestrator |
2026-01-07 01:00:52.582387 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-01-07 01:00:52.582394 | orchestrator | Wednesday 07 January 2026 01:00:36 +0000 (0:00:10.296) 0:01:53.182 *****
2026-01-07 01:00:52.582402 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:00:52.582425 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:00:52.582432 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:00:52.582437 | orchestrator |
2026-01-07 01:00:52.582443 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:00:52.582451 | orchestrator | testbed-node-0 : ok=25  changed=20  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
2026-01-07 01:00:52.582458 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-01-07 01:00:52.582465 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-01-07 01:00:52.582471 | orchestrator |
2026-01-07 01:00:52.582478 | orchestrator |
2026-01-07 01:00:52.582485 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:00:52.582491 | orchestrator | Wednesday 07 January 2026 01:00:48 +0000 (0:00:12.175) 0:02:05.357 *****
2026-01-07 01:00:52.582497 | orchestrator | ===============================================================================
2026-01-07 01:00:52.582504 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.41s
2026-01-07 01:00:52.582510 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.18s
2026-01-07 01:00:52.582514 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.49s
2026-01-07 01:00:52.582519 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.39s
2026-01-07 01:00:52.582524 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.30s
2026-01-07 01:00:52.582533 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.62s
2026-01-07 01:00:52.582538 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.97s
2026-01-07 01:00:52.582542 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 4.68s
2026-01-07 01:00:52.582547 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.37s
2026-01-07 01:00:52.582551 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.29s
2026-01-07 01:00:52.582556 | orchestrator |
service-ks-register : barbican | Granting/revoking user roles ----------- 4.15s 2026-01-07 01:00:52.582560 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.78s 2026-01-07 01:00:52.582565 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.24s 2026-01-07 01:00:52.582569 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 2.79s 2026-01-07 01:00:52.582574 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.78s 2026-01-07 01:00:52.582578 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.63s 2026-01-07 01:00:52.582583 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.52s 2026-01-07 01:00:52.582589 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.13s 2026-01-07 01:00:52.582597 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.57s 2026-01-07 01:00:52.582612 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.38s 2026-01-07 01:00:52.582619 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task 511892d0-29a4-4eba-a6fc-48f9b5a315da is in state STARTED 2026-01-07 01:00:52.582625 | orchestrator | 2026-01-07 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:55.618338 | orchestrator | 2026-01-07 01:00:55 | INFO  | Task d72c4574-0af4-4557-ad31-f7fcfdaaa8b4 is in state STARTED 2026-01-07 01:00:55.619281 | orchestrator | 2026-01-07 01:00:55 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:00:55.620347 | orchestrator | 2026-01-07 01:00:55 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:00:55.621294 | orchestrator | 2026-01-07 01:00:55 | INFO  | Task 511892d0-29a4-4eba-a6fc-48f9b5a315da is in state STARTED 2026-01-07 
01:00:55.621347 | orchestrator | 2026-01-07 01:00:55 | INFO  | Wait 1 second(s) until the next check [... identical "is in state STARTED" polling messages for tasks d72c4574, cf136761, b051f653 and 511892d0, repeated until 01:01:44, omitted ...] 2026-01-07 01:01:44.381887 | orchestrator
| 2026-01-07 01:01:44 | INFO  | Task d72c4574-0af4-4557-ad31-f7fcfdaaa8b4 is in state SUCCESS 2026-01-07 01:01:44.383188 | orchestrator | 2026-01-07 01:01:44.383249 | orchestrator | 2026-01-07 01:01:44.383258 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:01:44.383266 | orchestrator | 2026-01-07 01:01:44.383272 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:01:44.383279 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.304) 0:00:00.304 ***** 2026-01-07 01:01:44.383303 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:01:44.383312 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:01:44.383319 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:01:44.383325 | orchestrator | 2026-01-07 01:01:44.383331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:01:44.383337 | orchestrator | Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.309) 0:00:00.614 ***** 2026-01-07 01:01:44.383344 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-07 01:01:44.383351 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-07 01:01:44.383358 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-07 01:01:44.383364 | orchestrator | 2026-01-07 01:01:44.383370 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-07 01:01:44.383376 | orchestrator | 2026-01-07 01:01:44.383382 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:01:44.383388 | orchestrator | Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.389) 0:00:01.004 ***** 2026-01-07 01:01:44.383395 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:01:44.383403 | 
orchestrator | 2026-01-07 01:01:44.383409 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-01-07 01:01:44.383416 | orchestrator | Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.582) 0:00:01.586 ***** 2026-01-07 01:01:44.383806 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-07 01:01:44.383825 | orchestrator | 2026-01-07 01:01:44.383831 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-01-07 01:01:44.383838 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:04.713) 0:00:06.300 ***** 2026-01-07 01:01:44.383844 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-07 01:01:44.383852 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-07 01:01:44.383858 | orchestrator | 2026-01-07 01:01:44.383864 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-07 01:01:44.383870 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:00:07.268) 0:00:13.568 ***** 2026-01-07 01:01:44.383901 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:01:44.383910 | orchestrator | 2026-01-07 01:01:44.383916 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-07 01:01:44.383922 | orchestrator | Wednesday 07 January 2026 00:59:00 +0000 (0:00:03.380) 0:00:16.948 ***** 2026-01-07 01:01:44.383928 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:01:44.383934 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-07 01:01:44.383939 | orchestrator | 2026-01-07 01:01:44.383945 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-07 01:01:44.383950 | orchestrator | 
Wednesday 07 January 2026 00:59:04 +0000 (0:00:03.792) 0:00:20.741 ***** 2026-01-07 01:01:44.383956 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:01:44.383963 | orchestrator | 2026-01-07 01:01:44.383969 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-01-07 01:01:44.383975 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:03.378) 0:00:24.119 ***** 2026-01-07 01:01:44.383981 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-07 01:01:44.383987 | orchestrator | 2026-01-07 01:01:44.383993 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-07 01:01:44.383999 | orchestrator | Wednesday 07 January 2026 00:59:11 +0000 (0:00:03.941) 0:00:28.061 ***** 2026-01-07 01:01:44.384009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.384041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.384049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.384064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384118 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384146 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.384913 | orchestrator | 2026-01-07 01:01:44.384917 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-07 01:01:44.384922 | orchestrator | Wednesday 07 January 2026 00:59:14 +0000 (0:00:02.955) 0:00:31.016 ***** 
2026-01-07 01:01:44.384926 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.384930 | orchestrator | 2026-01-07 01:01:44.384934 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-07 01:01:44.384937 | orchestrator | Wednesday 07 January 2026 00:59:14 +0000 (0:00:00.127) 0:00:31.144 ***** 2026-01-07 01:01:44.384941 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.384945 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.384948 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.384952 | orchestrator | 2026-01-07 01:01:44.384956 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:01:44.384960 | orchestrator | Wednesday 07 January 2026 00:59:15 +0000 (0:00:00.292) 0:00:31.437 ***** 2026-01-07 01:01:44.384964 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:01:44.384968 | orchestrator | 2026-01-07 01:01:44.384972 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-07 01:01:44.384975 | orchestrator | Wednesday 07 January 2026 00:59:15 +0000 (0:00:00.690) 0:00:32.128 ***** 2026-01-07 01:01:44.385005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385177 | orchestrator | 2026-01-07 01:01:44.385181 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-07 01:01:44.385185 | orchestrator | Wednesday 07 January 2026 00:59:21 +0000 (0:00:05.903) 0:00:38.032 ***** 2026-01-07 01:01:44.385208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385274 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.385278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385315 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.385319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385329 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.385333 | orchestrator | 2026-01-07 01:01:44.385337 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-07 01:01:44.385341 | orchestrator | Wednesday 07 January 2026 00:59:23 +0000 (0:00:02.174) 0:00:40.206 ***** 2026-01-07 01:01:44.385361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385503 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.385510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.385545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385551 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.385558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.385564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.385589 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.385593 | orchestrator | 2026-01-07 01:01:44.385596 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-07 01:01:44.385614 | orchestrator | Wednesday 07 January 2026 00:59:27 +0000 (0:00:03.471) 0:00:43.678 ***** 2026-01-07 01:01:44.385626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385772 | orchestrator | 2026-01-07 01:01:44.385776 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-07 01:01:44.385792 | orchestrator | Wednesday 07 January 2026 00:59:32 +0000 (0:00:05.212) 0:00:48.890 ***** 2026-01-07 01:01:44.385800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.385817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-07 01:01:44.385834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2026-01-07 01:01:44.385864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385893 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.385929 | orchestrator | 2026-01-07 01:01:44.385952 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-07 01:01:44.385958 | orchestrator | Wednesday 07 January 2026 00:59:53 +0000 (0:00:20.670) 0:01:09.560 ***** 2026-01-07 01:01:44.385965 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:01:44.385974 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:01:44.385980 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-07 01:01:44.385986 | orchestrator | 2026-01-07 01:01:44.385992 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-07 
01:01:44.385998 | orchestrator | Wednesday 07 January 2026 00:59:58 +0000 (0:00:05.262) 0:01:14.823 ***** 2026-01-07 01:01:44.386004 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:01:44.386010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:01:44.386053 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-07 01:01:44.386060 | orchestrator | 2026-01-07 01:01:44.386072 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-07 01:01:44.386092 | orchestrator | Wednesday 07 January 2026 01:00:01 +0000 (0:00:03.158) 0:01:17.982 ***** 2026-01-07 01:01:44.386099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386284 | orchestrator | 2026-01-07 01:01:44.386291 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-01-07 01:01:44.386297 | orchestrator | Wednesday 07 January 2026 01:00:04 +0000 (0:00:03.408) 0:01:21.391 ***** 
2026-01-07 01:01:44.386301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  
2026-01-07 01:01:44.386309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386450 | orchestrator | 2026-01-07 01:01:44.386456 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:01:44.386463 | orchestrator | Wednesday 07 January 2026 01:00:07 +0000 (0:00:02.619) 0:01:24.010 ***** 2026-01-07 01:01:44.386469 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.386475 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.386482 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.386488 | orchestrator | 2026-01-07 01:01:44.386494 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-07 01:01:44.386500 | orchestrator | Wednesday 07 January 2026 01:00:08 +0000 (0:00:00.767) 0:01:24.778 ***** 2026-01-07 01:01:44.386507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.386520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-01-07 01:01:44.386561 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.386568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.386581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386621 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.386628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.386646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386681 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.386688 | orchestrator | 2026-01-07 01:01:44.386695 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-01-07 01:01:44.386702 | orchestrator | Wednesday 07 January 2026 01:00:09 +0000 (0:00:01.174) 0:01:25.953 ***** 2026-01-07 01:01:44.386708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.386715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.386741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:01:44.386748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 
01:01:44.386822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:01:44.386835 | orchestrator | 2026-01-07 01:01:44.386839 | orchestrator | TASK 
[service-check-containers : designate | Notify handlers to restart containers] *** 2026-01-07 01:01:44.386843 | orchestrator | Wednesday 07 January 2026 01:00:13 +0000 (0:00:04.234) 0:01:30.188 ***** 2026-01-07 01:01:44.386847 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:01:44.386851 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:01:44.386855 | orchestrator | } 2026-01-07 01:01:44.386859 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:01:44.386864 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:01:44.386867 | orchestrator | } 2026-01-07 01:01:44.386871 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:01:44.386879 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:01:44.386883 | orchestrator | } 2026-01-07 01:01:44.386887 | orchestrator | 2026-01-07 01:01:44.386891 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:01:44.386895 | orchestrator | Wednesday 07 January 2026 01:00:14 +0000 (0:00:00.258) 0:01:30.447 ***** 2026-01-07 01:01:44.386899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.386915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.386957 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.386962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.386967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.386997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387022 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.387026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:01:44.387031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:01:44.387043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:01:44.387065 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.387068 | orchestrator | 2026-01-07 01:01:44.387072 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:01:44.387076 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:01.670) 0:01:32.118 ***** 2026-01-07 01:01:44.387097 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:44.387101 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:44.387105 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:44.387109 | orchestrator | 2026-01-07 01:01:44.387113 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-07 01:01:44.387117 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:00.315) 0:01:32.433 ***** 2026-01-07 01:01:44.387121 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-07 01:01:44.387125 | orchestrator | 2026-01-07 01:01:44.387129 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-07 01:01:44.387134 | orchestrator | Wednesday 07 January 2026 01:00:18 +0000 (0:00:02.038) 0:01:34.472 ***** 2026-01-07 01:01:44.387141 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 01:01:44.387147 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-07 01:01:44.387156 | orchestrator | 2026-01-07 01:01:44.387165 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-07 01:01:44.387171 | orchestrator | Wednesday 07 January 2026 01:00:20 +0000 (0:00:02.064) 0:01:36.536 ***** 2026-01-07 01:01:44.387177 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387183 | orchestrator | 2026-01-07 01:01:44.387189 | orchestrator | TASK [designate : Flush handlers] 
********************************************** 2026-01-07 01:01:44.387195 | orchestrator | Wednesday 07 January 2026 01:00:34 +0000 (0:00:13.910) 0:01:50.447 ***** 2026-01-07 01:01:44.387201 | orchestrator | 2026-01-07 01:01:44.387207 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:01:44.387212 | orchestrator | Wednesday 07 January 2026 01:00:34 +0000 (0:00:00.140) 0:01:50.588 ***** 2026-01-07 01:01:44.387219 | orchestrator | 2026-01-07 01:01:44.387225 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:01:44.387231 | orchestrator | Wednesday 07 January 2026 01:00:34 +0000 (0:00:00.126) 0:01:50.714 ***** 2026-01-07 01:01:44.387237 | orchestrator | 2026-01-07 01:01:44.387243 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-07 01:01:44.387249 | orchestrator | Wednesday 07 January 2026 01:00:34 +0000 (0:00:00.068) 0:01:50.782 ***** 2026-01-07 01:01:44.387254 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387261 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387267 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387272 | orchestrator | 2026-01-07 01:01:44.387285 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-07 01:01:44.387292 | orchestrator | Wednesday 07 January 2026 01:00:48 +0000 (0:00:14.136) 0:02:04.919 ***** 2026-01-07 01:01:44.387299 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387306 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387317 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387321 | orchestrator | 2026-01-07 01:01:44.387329 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-07 01:01:44.387333 | orchestrator | Wednesday 07 January 2026 01:01:00 +0000 (0:00:12.037) 
0:02:16.957 ***** 2026-01-07 01:01:44.387338 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387341 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387345 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387349 | orchestrator | 2026-01-07 01:01:44.387353 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-07 01:01:44.387357 | orchestrator | Wednesday 07 January 2026 01:01:10 +0000 (0:00:09.967) 0:02:26.924 ***** 2026-01-07 01:01:44.387361 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387365 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387368 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387372 | orchestrator | 2026-01-07 01:01:44.387377 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-07 01:01:44.387382 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:10.165) 0:02:37.090 ***** 2026-01-07 01:01:44.387388 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387393 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387396 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387400 | orchestrator | 2026-01-07 01:01:44.387405 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-07 01:01:44.387408 | orchestrator | Wednesday 07 January 2026 01:01:29 +0000 (0:00:09.036) 0:02:46.126 ***** 2026-01-07 01:01:44.387412 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387417 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:44.387420 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:44.387424 | orchestrator | 2026-01-07 01:01:44.387428 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-07 01:01:44.387432 | orchestrator | Wednesday 07 January 2026 01:01:35 +0000 (0:00:05.709) 0:02:51.836 
***** 2026-01-07 01:01:44.387436 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:44.387440 | orchestrator | 2026-01-07 01:01:44.387446 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:01:44.387455 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-07 01:01:44.387463 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:01:44.387471 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:01:44.387479 | orchestrator | 2026-01-07 01:01:44.387484 | orchestrator | 2026-01-07 01:01:44.387489 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:01:44.387496 | orchestrator | Wednesday 07 January 2026 01:01:43 +0000 (0:00:07.946) 0:02:59.782 ***** 2026-01-07 01:01:44.387503 | orchestrator | =============================================================================== 2026-01-07 01:01:44.387508 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.67s 2026-01-07 01:01:44.387514 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.14s 2026-01-07 01:01:44.387520 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.91s 2026-01-07 01:01:44.387525 | orchestrator | designate : Restart designate-api container ---------------------------- 12.04s 2026-01-07 01:01:44.387532 | orchestrator | designate : Restart designate-producer container ----------------------- 10.17s 2026-01-07 01:01:44.387537 | orchestrator | designate : Restart designate-central container ------------------------- 9.97s 2026-01-07 01:01:44.387543 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.04s 2026-01-07 01:01:44.387549 | 
orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.95s 2026-01-07 01:01:44.387565 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.27s 2026-01-07 01:01:44.387572 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.90s 2026-01-07 01:01:44.387579 | orchestrator | designate : Restart designate-worker container -------------------------- 5.71s 2026-01-07 01:01:44.387585 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.26s 2026-01-07 01:01:44.387592 | orchestrator | designate : Copying over config.json files for services ----------------- 5.21s 2026-01-07 01:01:44.387598 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.71s 2026-01-07 01:01:44.387605 | orchestrator | service-check-containers : designate | Check containers ----------------- 4.23s 2026-01-07 01:01:44.387611 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 3.94s 2026-01-07 01:01:44.387617 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.79s 2026-01-07 01:01:44.387623 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 3.47s 2026-01-07 01:01:44.387630 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.41s 2026-01-07 01:01:44.387637 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.38s 2026-01-07 01:01:44.387644 | orchestrator | 2026-01-07 01:01:44 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:01:44.387658 | orchestrator | 2026-01-07 01:01:44 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:01:44.387665 | orchestrator | 2026-01-07 01:01:44 | INFO  | Task 511892d0-29a4-4eba-a6fc-48f9b5a315da is in state STARTED 
2026-01-07 01:01:44.387676 | orchestrator | 2026-01-07 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:01:47.429869 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:01:47.431461 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:01:47.433885 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task 511892d0-29a4-4eba-a6fc-48f9b5a315da is in state STARTED 2026-01-07 01:01:47.435378 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED 2026-01-07 01:01:47.435565 | orchestrator | 2026-01-07 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:05.730178 | orchestrator | 2026-01-07 01:02:05 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:02:05.731940 | orchestrator | 2026-01-07 01:02:05 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:02:05.735028 | orchestrator | 2026-01-07 01:02:05 | INFO  | Task 511892d0-29a4-4eba-a6fc-48f9b5a315da is in state SUCCESS 2026-01-07 01:02:05.736609 | orchestrator | 2026-01-07 01:02:05.736643 | orchestrator | 2026-01-07 01:02:05.736652 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:02:05.736662 | orchestrator | 2026-01-07 01:02:05.736668 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:02:05.736674 | orchestrator | Wednesday 07 January 2026 01:00:53 +0000 (0:00:00.191) 0:00:00.191 ***** 2026-01-07 01:02:05.736691 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:05.736698 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:05.736705 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:05.736711 | orchestrator | 2026-01-07 01:02:05.736718 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:02:05.736724 | orchestrator | Wednesday 07 January 2026 01:00:54 +0000 (0:00:00.245) 0:00:00.437 ***** 2026-01-07 01:02:05.736731 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-07 01:02:05.736738 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-07 01:02:05.736744 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-07 01:02:05.736751 | orchestrator | 2026-01-07 01:02:05.736757 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-07 01:02:05.736764 | orchestrator | 2026-01-07 01:02:05.736771 | orchestrator | TASK
[placement : include_tasks] *********************************************** 2026-01-07 01:02:05.736777 | orchestrator | Wednesday 07 January 2026 01:00:54 +0000 (0:00:00.363) 0:00:00.800 ***** 2026-01-07 01:02:05.736798 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:05.736804 | orchestrator | 2026-01-07 01:02:05.736807 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-01-07 01:02:05.736811 | orchestrator | Wednesday 07 January 2026 01:00:55 +0000 (0:00:00.501) 0:00:01.302 ***** 2026-01-07 01:02:05.736815 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-07 01:02:05.736820 | orchestrator | 2026-01-07 01:02:05.736826 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-01-07 01:02:05.736835 | orchestrator | Wednesday 07 January 2026 01:00:58 +0000 (0:00:03.644) 0:00:04.947 ***** 2026-01-07 01:02:05.736842 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-07 01:02:05.736848 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-07 01:02:05.736857 | orchestrator | 2026-01-07 01:02:05.736863 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-07 01:02:05.736868 | orchestrator | Wednesday 07 January 2026 01:01:05 +0000 (0:00:06.513) 0:00:11.460 ***** 2026-01-07 01:02:05.736873 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:02:05.736879 | orchestrator | 2026-01-07 01:02:05.736886 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-07 01:02:05.736891 | orchestrator | Wednesday 07 January 2026 01:01:09 +0000 (0:00:03.789) 0:00:15.250 ***** 2026-01-07 01:02:05.736897 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-01-07 01:02:05.736903 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-07 01:02:05.736909 | orchestrator | 2026-01-07 01:02:05.736914 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-07 01:02:05.736920 | orchestrator | Wednesday 07 January 2026 01:01:13 +0000 (0:00:04.669) 0:00:19.919 ***** 2026-01-07 01:02:05.736927 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:02:05.736933 | orchestrator | 2026-01-07 01:02:05.736938 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-01-07 01:02:05.736947 | orchestrator | Wednesday 07 January 2026 01:01:17 +0000 (0:00:03.539) 0:00:23.459 ***** 2026-01-07 01:02:05.736953 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-07 01:02:05.736959 | orchestrator | 2026-01-07 01:02:05.736965 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:02:05.736971 | orchestrator | Wednesday 07 January 2026 01:01:21 +0000 (0:00:03.939) 0:00:27.398 ***** 2026-01-07 01:02:05.736977 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.736982 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.736988 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:05.736993 | orchestrator | 2026-01-07 01:02:05.736999 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-07 01:02:05.737005 | orchestrator | Wednesday 07 January 2026 01:01:21 +0000 (0:00:00.324) 0:00:27.723 ***** 2026-01-07 01:02:05.737024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737069 | orchestrator | 2026-01-07 01:02:05.737073 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-07 01:02:05.737077 | orchestrator | Wednesday 07 January 2026 01:01:22 +0000 (0:00:01.018) 0:00:28.741 ***** 2026-01-07 01:02:05.737082 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737088 | orchestrator | 2026-01-07 01:02:05.737097 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-07 01:02:05.737104 | orchestrator | Wednesday 07 January 2026 01:01:22 +0000 (0:00:00.127) 0:00:28.869 ***** 2026-01-07 01:02:05.737110 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737116 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.737123 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:05.737129 | orchestrator | 2026-01-07 01:02:05.737134 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 
01:02:05.737142 | orchestrator | Wednesday 07 January 2026 01:01:23 +0000 (0:00:00.509) 0:00:29.378 ***** 2026-01-07 01:02:05.737146 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:05.737150 | orchestrator | 2026-01-07 01:02:05.737154 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-07 01:02:05.737158 | orchestrator | Wednesday 07 January 2026 01:01:23 +0000 (0:00:00.488) 0:00:29.867 ***** 2026-01-07 01:02:05.737162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737188 | orchestrator | 2026-01-07 01:02:05.737192 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-07 01:02:05.737195 | orchestrator 
| Wednesday 07 January 2026 01:01:25 +0000 (0:00:01.586) 0:00:31.453 ***** 2026-01-07 01:02:05.737200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737206 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737219 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.737226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737231 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:05.737236 | orchestrator | 2026-01-07 01:02:05.737241 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-07 01:02:05.737245 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:00.966) 0:00:32.420 ***** 2026-01-07 01:02:05.737252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737262 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}})  2026-01-07 01:02:05.737280 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.737296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737304 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:05.737308 | orchestrator | 2026-01-07 01:02:05.737313 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-07 01:02:05.737318 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:00.689) 0:00:33.109 ***** 2026-01-07 01:02:05.737323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737345 | orchestrator | 2026-01-07 01:02:05.737352 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-07 01:02:05.737357 | orchestrator | Wednesday 07 January 2026 01:01:28 +0000 (0:00:01.517) 0:00:34.627 ***** 2026-01-07 01:02:05.737369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-07 01:02:05.737392 | orchestrator |
2026-01-07 01:02:05.737398 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-01-07 01:02:05.737405 | orchestrator | Wednesday 07 January 2026 01:01:30 +0000 (0:00:02.501) 0:00:37.129 *****
2026-01-07 01:02:05.737411 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-07 01:02:05.737416 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:05.737422 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-07 01:02:05.737429 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:05.737435 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-07 01:02:05.737441 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:05.737446 | orchestrator |
2026-01-07 01:02:05.737452 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-01-07 01:02:05.737459 | orchestrator | Wednesday 07 January 2026 01:01:31 +0000 (0:00:00.676) 0:00:37.805 *****
2026-01-07 01:02:05.737465 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:02:05.737470 | orchestrator |
2026-01-07 01:02:05.737476 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-01-07 01:02:05.737486 | orchestrator | Wednesday 07 January 2026 01:01:32 +0000 (0:00:00.903) 0:00:38.709 *****
2026-01-07 01:02:05.737492 | orchestrator |
changed: [testbed-node-0] 2026-01-07 01:02:05.737498 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:02:05.737574 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:02:05.737579 | orchestrator | 2026-01-07 01:02:05.737585 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-07 01:02:05.737595 | orchestrator | Wednesday 07 January 2026 01:01:34 +0000 (0:00:02.119) 0:00:40.828 ***** 2026-01-07 01:02:05.737602 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:05.737607 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:02:05.737613 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:02:05.737618 | orchestrator | 2026-01-07 01:02:05.737625 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-07 01:02:05.737631 | orchestrator | Wednesday 07 January 2026 01:01:35 +0000 (0:00:01.265) 0:00:42.094 ***** 2026-01-07 01:02:05.737637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}})  2026-01-07 01:02:05.737653 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737667 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.737674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737680 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:05.737686 | orchestrator | 2026-01-07 01:02:05.737692 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-01-07 01:02:05.737699 | orchestrator | Wednesday 07 January 2026 01:01:36 +0000 (0:00:00.485) 0:00:42.579 ***** 2026-01-07 01:02:05.737713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-07 01:02:05.737741 | orchestrator | 2026-01-07 01:02:05.737748 | orchestrator | TASK [service-check-containers : 
placement | Notify handlers to restart containers] ***
2026-01-07 01:02:05.737754 | orchestrator | Wednesday 07 January 2026 01:01:37 +0000 (0:00:01.239) 0:00:43.818 *****
2026-01-07 01:02:05.737760 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 01:02:05.737767 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:05.737774 | orchestrator | }
2026-01-07 01:02:05.737780 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 01:02:05.737787 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:05.737793 | orchestrator | }
2026-01-07 01:02:05.737799 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 01:02:05.737805 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:05.737811 | orchestrator | }
2026-01-07 01:02:05.737817 | orchestrator |
2026-01-07 01:02:05.737824 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 01:02:05.737830 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.601) 0:00:44.419 *****
2026-01-07 01:02:05.737846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737854 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:05.737861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-07 01:02:05.737873 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:05.737880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-07 01:02:05.737886 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:05.737893 | orchestrator |
2026-01-07 01:02:05.737899 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-01-07 01:02:05.737906 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.684) 0:00:45.104 *****
2026-01-07 01:02:05.737912 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:05.737918 | orchestrator |
2026-01-07 01:02:05.737924 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-07 01:02:05.737931 | orchestrator | Wednesday 07 January 2026 01:01:41 +0000 (0:00:02.607) 0:00:47.712 *****
2026-01-07 01:02:05.737937 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:05.737943 | orchestrator |
2026-01-07 01:02:05.737950 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-07 01:02:05.737956 | orchestrator | Wednesday 07 January 2026 01:01:43 +0000 (0:00:02.445) 0:00:50.157 *****
2026-01-07 01:02:05.737963 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:05.737969 | orchestrator |
2026-01-07 01:02:05.737976 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:02:05.737982 | orchestrator | Wednesday 07 January 2026 01:01:57 +0000 (0:00:13.883) 0:01:04.040 *****
2026-01-07 01:02:05.737988 | orchestrator |
2026-01-07 01:02:05.737995 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:02:05.738001 | orchestrator | Wednesday 07 January 2026 01:01:57 +0000 (0:00:00.059) 0:01:04.100 *****
2026-01-07 01:02:05.738007 | orchestrator |
2026-01-07 01:02:05.738085 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:02:05.738097 | orchestrator | Wednesday 07 January 2026 01:01:58 +0000 (0:00:00.286) 0:01:04.386 *****
2026-01-07 01:02:05.738104 | orchestrator |
2026-01-07 01:02:05.738110 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-07 01:02:05.738117 | orchestrator | Wednesday 07 January 2026 01:01:58 +0000 (0:00:00.065) 0:01:04.451 *****
2026-01-07 01:02:05.738124 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:05.738135 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:05.738151 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:05.738155 | orchestrator |
2026-01-07 01:02:05.738163 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:02:05.738168 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-07 01:02:05.738175 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:02:05.738179 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:02:05.738183 | orchestrator |
2026-01-07 01:02:05.738187 | orchestrator |
2026-01-07 01:02:05.738191 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:02:05.738194 | orchestrator | Wednesday 07 January 2026 01:02:02 +0000 (0:00:04.602) 0:01:09.054 *****
2026-01-07 01:02:05.738198 | orchestrator | ===============================================================================
2026-01-07 01:02:05.738203 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.88s
2026-01-07 01:02:05.738208 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.51s
2026-01-07 01:02:05.738212 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.67s
2026-01-07 01:02:05.738217 | orchestrator | placement : Restart placement-api container ----------------------------- 4.60s
2026-01-07 01:02:05.738221 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.94s
2026-01-07 01:02:05.738225 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.79s
2026-01-07 01:02:05.738230 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.64s
2026-01-07 01:02:05.738234 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.54s
2026-01-07 01:02:05.738239 | orchestrator | placement : Creating placement databases -------------------------------- 2.61s
2026-01-07 01:02:05.738243 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.50s
2026-01-07 01:02:05.738247 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s
2026-01-07 01:02:05.738252 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.12s
2026-01-07 01:02:05.738256 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.59s
2026-01-07 01:02:05.738261 | orchestrator | placement : Copying over config.json files for services ----------------- 1.52s
2026-01-07 01:02:05.738265 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.27s
2026-01-07 01:02:05.738270 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.24s
2026-01-07 01:02:05.738274 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.02s
2026-01-07 01:02:05.738279 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.97s
2026-01-07 01:02:05.738284 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 0.90s
2026-01-07 01:02:05.738289 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.69s
2026-01-07 01:02:05.738294 | orchestrator | 2026-01-07 01:02:05 | INFO  | Task 39dfc304-c78a-46db-b953-5b1ca403c3a9 is in state STARTED
2026-01-07 01:02:05.739146 | orchestrator | 2026-01-07 01:02:05 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:05.739176 | orchestrator | 2026-01-07 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:08.770659 | orchestrator | 2026-01-07 01:02:08 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:08.772698 | orchestrator | 2026-01-07 01:02:08 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:08.773515 | orchestrator | 2026-01-07 01:02:08 | INFO  | Task 39dfc304-c78a-46db-b953-5b1ca403c3a9 is in state STARTED
2026-01-07 01:02:08.774511 | orchestrator | 2026-01-07 01:02:08 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:08.774537 | orchestrator | 2026-01-07 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:11.806565 | orchestrator | 2026-01-07 01:02:11 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:11.808722 | orchestrator | 2026-01-07 01:02:11 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:11.810842 | orchestrator | 2026-01-07 01:02:11 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:11.812823 | orchestrator | 2026-01-07 01:02:11 | INFO  | Task 39dfc304-c78a-46db-b953-5b1ca403c3a9 is in state SUCCESS
2026-01-07 01:02:11.814698 | orchestrator | 2026-01-07 01:02:11 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:11.814874 | orchestrator | 2026-01-07 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:14.859757 | orchestrator | 2026-01-07 01:02:14 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:14.860716 | orchestrator | 2026-01-07 01:02:14 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:14.862945 | orchestrator | 2026-01-07 01:02:14 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:14.864689 | orchestrator | 2026-01-07 01:02:14 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:14.864756 | orchestrator | 2026-01-07 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:17.903957 | orchestrator | 2026-01-07 01:02:17 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:17.904863 | orchestrator | 2026-01-07 01:02:17 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:17.906072 | orchestrator | 2026-01-07 01:02:17 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:17.907331 | orchestrator | 2026-01-07 01:02:17 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:17.907485 | orchestrator | 2026-01-07 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:20.942674 | orchestrator | 2026-01-07 01:02:20 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:20.944251 | orchestrator | 2026-01-07 01:02:20 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:20.945672 | orchestrator | 2026-01-07 01:02:20 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:20.946888 | orchestrator | 2026-01-07 01:02:20 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:20.946919 | orchestrator | 2026-01-07 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:23.977349 | orchestrator | 2026-01-07 01:02:23 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:23.977417 | orchestrator | 2026-01-07 01:02:23 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:23.979917 | orchestrator | 2026-01-07 01:02:23 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:23.980718 | orchestrator | 2026-01-07 01:02:23 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:23.980770 | orchestrator | 2026-01-07 01:02:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:27.038132 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:27.040681 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:27.042693 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:27.044908 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:27.045110 | orchestrator | 2026-01-07 01:02:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:30.082601 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:30.085527 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:30.087572 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:30.089492 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:30.089557 | orchestrator | 2026-01-07 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:33.125419 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:33.128433 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:33.129793 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:33.131144 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:33.131238 | orchestrator | 2026-01-07 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:36.162049 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:36.164726 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:36.167047 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:36.168848 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:36.168908 | orchestrator | 2026-01-07 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:39.213762 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:39.214162 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:39.215123 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:39.216197 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:39.216347 | orchestrator | 2026-01-07 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:42.244256 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:42.244549 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:42.245322 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:42.245939 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:42.246064 | orchestrator | 2026-01-07 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:45.276620 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:45.277484 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:45.278203 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED
2026-01-07 01:02:45.278830 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:45.278863 | orchestrator | 2026-01-07 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:48.317356 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:02:48.319501 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:02:48.321550 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task
b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:02:48.323713 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED 2026-01-07 01:02:48.324218 | orchestrator | 2026-01-07 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:51.377193 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:02:51.379236 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:02:51.380848 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:02:51.382963 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED 2026-01-07 01:02:51.383143 | orchestrator | 2026-01-07 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:54.433393 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:02:54.435717 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:02:54.436712 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state STARTED 2026-01-07 01:02:54.438909 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED 2026-01-07 01:02:54.438989 | orchestrator | 2026-01-07 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:02:57.478710 | orchestrator | 2026-01-07 01:02:57 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:02:57.480308 | orchestrator | 2026-01-07 01:02:57 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:02:57.484799 | orchestrator | 2026-01-07 01:02:57.484884 | orchestrator | 2026-01-07 
01:02:57.484892 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:02:57.484900 | orchestrator | 2026-01-07 01:02:57.484907 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:02:57.484942 | orchestrator | Wednesday 07 January 2026 01:02:07 +0000 (0:00:00.199) 0:00:00.199 ***** 2026-01-07 01:02:57.484949 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.484957 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.484982 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:57.484989 | orchestrator | 2026-01-07 01:02:57.484994 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:02:57.485000 | orchestrator | Wednesday 07 January 2026 01:02:07 +0000 (0:00:00.299) 0:00:00.498 ***** 2026-01-07 01:02:57.485007 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-07 01:02:57.485014 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-07 01:02:57.485018 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-07 01:02:57.485022 | orchestrator | 2026-01-07 01:02:57.485026 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-07 01:02:57.485029 | orchestrator | 2026-01-07 01:02:57.485033 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-07 01:02:57.485037 | orchestrator | Wednesday 07 January 2026 01:02:08 +0000 (0:00:00.616) 0:00:01.115 ***** 2026-01-07 01:02:57.485041 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.485045 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:57.485048 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.485052 | orchestrator | 2026-01-07 01:02:57.485056 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 
01:02:57.485060 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:02:57.485066 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:02:57.485070 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:02:57.485074 | orchestrator | 2026-01-07 01:02:57.485077 | orchestrator | 2026-01-07 01:02:57.485081 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:02:57.485085 | orchestrator | Wednesday 07 January 2026 01:02:08 +0000 (0:00:00.738) 0:00:01.854 ***** 2026-01-07 01:02:57.485089 | orchestrator | =============================================================================== 2026-01-07 01:02:57.485093 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.74s 2026-01-07 01:02:57.485097 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-01-07 01:02:57.485101 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-01-07 01:02:57.485105 | orchestrator | 2026-01-07 01:02:57.485175 | orchestrator | 2026-01-07 01:02:57.485182 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:02:57.485189 | orchestrator | 2026-01-07 01:02:57.485195 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:02:57.485200 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.297) 0:00:00.297 ***** 2026-01-07 01:02:57.485207 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.485212 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.485216 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:57.485220 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:02:57.485223 | 
orchestrator | ok: [testbed-node-4] 2026-01-07 01:02:57.485228 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:02:57.485234 | orchestrator | 2026-01-07 01:02:57.485240 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:02:57.485247 | orchestrator | Wednesday 07 January 2026 00:58:44 +0000 (0:00:00.723) 0:00:01.021 ***** 2026-01-07 01:02:57.485256 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-07 01:02:57.485262 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-07 01:02:57.485267 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-07 01:02:57.485273 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-07 01:02:57.485278 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-07 01:02:57.485284 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-07 01:02:57.485309 | orchestrator | 2026-01-07 01:02:57.485595 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-07 01:02:57.485604 | orchestrator | 2026-01-07 01:02:57.485610 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:02:57.485616 | orchestrator | Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.637) 0:00:01.659 ***** 2026-01-07 01:02:57.485623 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:02:57.485631 | orchestrator | 2026-01-07 01:02:57.485638 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-07 01:02:57.485645 | orchestrator | Wednesday 07 January 2026 00:58:46 +0000 (0:00:01.031) 0:00:02.690 ***** 2026-01-07 01:02:57.485651 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:02:57.485658 | orchestrator | 
ok: [testbed-node-2] 2026-01-07 01:02:57.485664 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.485670 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.485676 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:02:57.485682 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:02:57.485688 | orchestrator | 2026-01-07 01:02:57.485694 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-07 01:02:57.485699 | orchestrator | Wednesday 07 January 2026 00:58:47 +0000 (0:00:01.356) 0:00:04.047 ***** 2026-01-07 01:02:57.485703 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.485707 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.485710 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:02:57.485714 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:02:57.485718 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:02:57.485895 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:02:57.485905 | orchestrator | 2026-01-07 01:02:57.485912 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-07 01:02:57.485918 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 (0:00:01.066) 0:00:05.114 ***** 2026-01-07 01:02:57.485993 | orchestrator | ok: [testbed-node-0] => { 2026-01-07 01:02:57.486001 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486007 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486049 | orchestrator | } 2026-01-07 01:02:57.486057 | orchestrator | ok: [testbed-node-1] => { 2026-01-07 01:02:57.486064 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486070 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486076 | orchestrator | } 2026-01-07 01:02:57.486083 | orchestrator | ok: [testbed-node-2] => { 2026-01-07 01:02:57.486089 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486096 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486102 | 
orchestrator | } 2026-01-07 01:02:57.486108 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 01:02:57.486114 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486121 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486127 | orchestrator | } 2026-01-07 01:02:57.486133 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 01:02:57.486139 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486145 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486152 | orchestrator | } 2026-01-07 01:02:57.486158 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 01:02:57.486164 | orchestrator |  "changed": false, 2026-01-07 01:02:57.486170 | orchestrator |  "msg": "All assertions passed" 2026-01-07 01:02:57.486177 | orchestrator | } 2026-01-07 01:02:57.486183 | orchestrator | 2026-01-07 01:02:57.486189 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-07 01:02:57.486195 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:00.664) 0:00:05.778 ***** 2026-01-07 01:02:57.486202 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486208 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486214 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.486220 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.486226 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.486238 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.486244 | orchestrator | 2026-01-07 01:02:57.486250 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-01-07 01:02:57.486254 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:00.543) 0:00:06.321 ***** 2026-01-07 01:02:57.486258 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-07 01:02:57.486262 | orchestrator | 2026-01-07 01:02:57.486268 | orchestrator | TASK [service-ks-register : 
neutron | Creating/deleting endpoints] ************* 2026-01-07 01:02:57.486274 | orchestrator | Wednesday 07 January 2026 00:58:53 +0000 (0:00:03.586) 0:00:09.908 ***** 2026-01-07 01:02:57.486279 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-07 01:02:57.486290 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-07 01:02:57.486297 | orchestrator | 2026-01-07 01:02:57.486303 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-07 01:02:57.486309 | orchestrator | Wednesday 07 January 2026 00:58:59 +0000 (0:00:06.226) 0:00:16.134 ***** 2026-01-07 01:02:57.486314 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:02:57.486321 | orchestrator | 2026-01-07 01:02:57.486327 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-07 01:02:57.486333 | orchestrator | Wednesday 07 January 2026 00:59:02 +0000 (0:00:03.187) 0:00:19.321 ***** 2026-01-07 01:02:57.486339 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:02:57.486346 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-07 01:02:57.486352 | orchestrator | 2026-01-07 01:02:57.486356 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-07 01:02:57.486359 | orchestrator | Wednesday 07 January 2026 00:59:06 +0000 (0:00:03.846) 0:00:23.168 ***** 2026-01-07 01:02:57.486363 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:02:57.486367 | orchestrator | 2026-01-07 01:02:57.486371 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-01-07 01:02:57.486375 | orchestrator | Wednesday 07 January 2026 00:59:10 +0000 (0:00:03.552) 0:00:26.720 ***** 2026-01-07 01:02:57.486378 | orchestrator | 
changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-07 01:02:57.486382 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-07 01:02:57.486386 | orchestrator | 2026-01-07 01:02:57.486390 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:02:57.486394 | orchestrator | Wednesday 07 January 2026 00:59:18 +0000 (0:00:07.693) 0:00:34.413 ***** 2026-01-07 01:02:57.486397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486401 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486405 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.486409 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.486413 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.486417 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.486421 | orchestrator | 2026-01-07 01:02:57.486425 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-07 01:02:57.486428 | orchestrator | Wednesday 07 January 2026 00:59:18 +0000 (0:00:00.810) 0:00:35.223 ***** 2026-01-07 01:02:57.486432 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486436 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486440 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.486444 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.486447 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.486451 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.486455 | orchestrator | 2026-01-07 01:02:57.486459 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-07 01:02:57.486463 | orchestrator | Wednesday 07 January 2026 00:59:20 +0000 (0:00:02.061) 0:00:37.285 ***** 2026-01-07 01:02:57.486472 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:02:57.486476 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 01:02:57.486480 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:02:57.486484 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:02:57.486487 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:02:57.486519 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:02:57.486524 | orchestrator | 2026-01-07 01:02:57.486529 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-07 01:02:57.486534 | orchestrator | Wednesday 07 January 2026 00:59:21 +0000 (0:00:01.067) 0:00:38.352 ***** 2026-01-07 01:02:57.486543 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486548 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.486553 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486557 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.486562 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.486567 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.486571 | orchestrator | 2026-01-07 01:02:57.486575 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-07 01:02:57.486580 | orchestrator | Wednesday 07 January 2026 00:59:25 +0000 (0:00:03.352) 0:00:41.705 ***** 2026-01-07 01:02:57.486588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486642 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486646 | orchestrator | 2026-01-07 01:02:57.486651 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-07 01:02:57.486655 | orchestrator | Wednesday 07 January 2026 00:59:28 +0000 (0:00:02.749) 0:00:44.455 ***** 2026-01-07 01:02:57.486660 | orchestrator | [WARNING]: Skipped 2026-01-07 01:02:57.486664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-07 01:02:57.486669 | orchestrator | due to this access issue: 2026-01-07 01:02:57.486674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-07 01:02:57.486678 | orchestrator | a directory 2026-01-07 01:02:57.486683 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:02:57.486687 | orchestrator | 2026-01-07 01:02:57.486691 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:02:57.486696 | orchestrator | Wednesday 07 January 2026 00:59:28 +0000 (0:00:00.770) 0:00:45.226 ***** 2026-01-07 01:02:57.486701 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-01-07 01:02:57.486707 | orchestrator | 2026-01-07 01:02:57.486711 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-07 01:02:57.486715 | orchestrator | Wednesday 07 January 2026 00:59:29 +0000 (0:00:01.063) 0:00:46.289 ***** 2026-01-07 01:02:57.486720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.486797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.486813 | orchestrator | 2026-01-07 01:02:57.486817 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-07 01:02:57.486821 | 
orchestrator | Wednesday 07 January 2026 00:59:32 +0000 (0:00:02.900) 0:00:49.190 ***** 2026-01-07 01:02:57.486839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486844 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 
01:02:57.486853 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.486856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486864 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486872 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.486892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.486897 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.486901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.486905 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 01:02:57.486908 | orchestrator | 2026-01-07 01:02:57.486912 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-07 01:02:57.486916 | orchestrator | Wednesday 07 January 2026 00:59:36 +0000 (0:00:03.517) 0:00:52.708 ***** 2026-01-07 01:02:57.486920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486954 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.486961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486968 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.486991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.486996 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487045 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487061 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487065 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487068 | orchestrator | 2026-01-07 01:02:57.487072 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-07 01:02:57.487076 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:03.185) 0:00:55.893 ***** 2026-01-07 01:02:57.487080 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487084 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487089 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487095 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487104 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487111 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487117 | orchestrator | 2026-01-07 01:02:57.487123 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-07 01:02:57.487129 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:02.374) 0:00:58.268 ***** 2026-01-07 01:02:57.487135 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487141 | orchestrator | 2026-01-07 01:02:57.487147 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-07 01:02:57.487153 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.120) 0:00:58.388 ***** 2026-01-07 01:02:57.487160 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487167 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487174 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487183 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487189 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487195 | orchestrator | skipping: [testbed-node-5] 
2026-01-07 01:02:57.487201 | orchestrator | 2026-01-07 01:02:57.487207 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-07 01:02:57.487212 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.641) 0:00:59.030 ***** 2026-01-07 01:02:57 | INFO  | Task b051f653-dcee-473e-a1ba-57c6431324a4 is in state SUCCESS 2026-01-07 01:02:57.487241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487267 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487280 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487291 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487303 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487341 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487359 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487364 | orchestrator | 2026-01-07 01:02:57.487370 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-07 01:02:57.487376 | orchestrator | Wednesday 07 January 2026 00:59:45 +0000 (0:00:03.070) 0:01:02.100 ***** 2026-01-07 01:02:57.487383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487452 | orchestrator | 2026-01-07 01:02:57.487458 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-07 01:02:57.487462 | orchestrator | Wednesday 07 January 2026 00:59:49 +0000 (0:00:03.730) 0:01:05.831 ***** 2026-01-07 01:02:57.487466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:02:57.487510 | orchestrator 
| 2026-01-07 01:02:57.487513 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-07 01:02:57.487520 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:06.574) 0:01:12.405 ***** 2026-01-07 01:02:57.487527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487531 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487539 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487547 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.487561 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487572 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487580 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487584 | orchestrator | 2026-01-07 01:02:57.487588 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-07 01:02:57.487592 | orchestrator | Wednesday 07 January 2026 00:59:58 +0000 (0:00:02.536) 0:01:14.941 ***** 2026-01-07 01:02:57.487596 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487600 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487604 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487608 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:02:57.487613 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:02:57.487619 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:02:57.487625 | orchestrator | 2026-01-07 01:02:57.487631 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-07 01:02:57.487637 | orchestrator | Wednesday 07 January 2026 01:00:01 +0000 (0:00:02.902) 0:01:17.844 ***** 2026-01-07 01:02:57.487643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487650 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 01:02:57.487656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487667 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.487690 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:02:57.487724 | orchestrator | 2026-01-07 01:02:57.487731 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-07 01:02:57.487737 | orchestrator | Wednesday 07 January 2026 01:00:05 +0000 (0:00:03.571) 0:01:21.416 ***** 2026-01-07 01:02:57.487745 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487752 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487759 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487767 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487773 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487780 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487787 | orchestrator | 2026-01-07 01:02:57.487794 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-07 01:02:57.487800 | orchestrator | Wednesday 07 January 2026 01:00:07 +0000 (0:00:02.484) 0:01:23.900 ***** 2026-01-07 01:02:57.487807 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487813 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:02:57.487822 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487829 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487835 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487841 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487847 | orchestrator | 2026-01-07 01:02:57.487853 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-07 01:02:57.487879 | orchestrator | Wednesday 07 January 2026 01:00:09 +0000 (0:00:02.310) 0:01:26.210 ***** 2026-01-07 01:02:57.487895 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.487903 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487910 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.487917 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.487960 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.487968 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.487975 | orchestrator | 2026-01-07 01:02:57.487981 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-07 01:02:57.487987 | orchestrator | Wednesday 07 January 2026 01:00:11 +0000 (0:00:02.062) 0:01:28.273 ***** 2026-01-07 01:02:57.487994 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.487999 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488006 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488013 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488019 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488025 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488032 | orchestrator | 2026-01-07 01:02:57.488038 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-07 01:02:57.488043 | orchestrator | Wednesday 07 January 2026 01:00:13 +0000 (0:00:01.989) 
0:01:30.263 ***** 2026-01-07 01:02:57.488048 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488053 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488058 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488063 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488067 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488071 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488074 | orchestrator | 2026-01-07 01:02:57.488078 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-07 01:02:57.488082 | orchestrator | Wednesday 07 January 2026 01:00:16 +0000 (0:00:02.156) 0:01:32.419 ***** 2026-01-07 01:02:57.488086 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488090 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488093 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488104 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488108 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488111 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488115 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488119 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488123 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488126 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488130 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:02:57.488134 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488138 | orchestrator | 2026-01-07 
01:02:57.488142 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-07 01:02:57.488146 | orchestrator | Wednesday 07 January 2026 01:00:18 +0000 (0:00:02.053) 0:01:34.473 ***** 2026-01-07 01:02:57.488150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488155 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488175 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488188 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488197 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488205 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488213 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488217 | orchestrator | 2026-01-07 01:02:57.488221 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-07 01:02:57.488224 | orchestrator | Wednesday 07 January 2026 01:00:19 +0000 (0:00:01.695) 0:01:36.168 ***** 2026-01-07 01:02:57.488235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488244 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488252 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:02:57.488262 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488275 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488300 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:02:57.488317 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488323 | orchestrator | 2026-01-07 01:02:57.488329 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-07 01:02:57.488335 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:01.909) 0:01:38.078 ***** 2026-01-07 01:02:57.488341 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488347 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488350 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488354 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:02:57.488358 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:02:57.488362 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:02:57.488366 | orchestrator | 2026-01-07 01:02:57.488369 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-07 01:02:57.488373 | orchestrator | Wednesday 07 January 2026 01:00:23 +0000 (0:00:01.655) 0:01:39.734 ***** 2026-01-07 01:02:57.488377 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:57.488381 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:57.488384 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:57.488388 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:02:57.488392 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:02:57.488396 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:02:57.488399 | orchestrator | 2026-01-07 01:02:57.488403 | orchestrator | TASK [neutron : Copying over metering_agent.ini] 
*******************************
2026-01-07 01:02:57.488407 | orchestrator | Wednesday 07 January 2026 01:00:26 +0000 (0:00:03.175) 0:01:42.909 *****
2026-01-07 01:02:57.488410 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488414 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488418 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488422 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488425 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488429 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488433 | orchestrator |
2026-01-07 01:02:57.488436 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-07 01:02:57.488440 | orchestrator | Wednesday 07 January 2026 01:00:29 +0000 (0:00:02.464) 0:01:45.374 *****
2026-01-07 01:02:57.488444 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488448 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488452 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488455 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488459 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488463 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488466 | orchestrator |
2026-01-07 01:02:57.488470 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-07 01:02:57.488474 | orchestrator | Wednesday 07 January 2026 01:00:30 +0000 (0:00:01.971) 0:01:47.345 *****
2026-01-07 01:02:57.488478 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488481 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488485 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488489 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488493 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488496 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488500 | orchestrator |
2026-01-07 01:02:57.488504 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-07 01:02:57.488512 | orchestrator | Wednesday 07 January 2026 01:00:33 +0000 (0:00:02.107) 0:01:49.452 *****
2026-01-07 01:02:57.488516 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488519 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488523 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488527 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488531 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488535 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488538 | orchestrator |
2026-01-07 01:02:57.488542 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-07 01:02:57.488546 | orchestrator | Wednesday 07 January 2026 01:00:35 +0000 (0:00:02.243) 0:01:51.696 *****
2026-01-07 01:02:57.488549 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488553 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488557 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488561 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488564 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488568 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488572 | orchestrator |
2026-01-07 01:02:57.488580 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-07 01:02:57.488583 | orchestrator | Wednesday 07 January 2026 01:00:38 +0000 (0:00:03.026) 0:01:54.723 *****
2026-01-07 01:02:57.488587 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488591 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488599 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488602 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488606 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488610 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488614 | orchestrator |
2026-01-07 01:02:57.488618 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-07 01:02:57.488622 | orchestrator | Wednesday 07 January 2026 01:00:40 +0000 (0:00:01.761) 0:01:56.484 *****
2026-01-07 01:02:57.488625 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488629 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488633 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488636 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488640 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488644 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488647 | orchestrator |
2026-01-07 01:02:57.488651 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-07 01:02:57.488655 | orchestrator | Wednesday 07 January 2026 01:00:41 +0000 (0:00:01.720) 0:01:58.204 *****
2026-01-07 01:02:57.488659 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488664 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488668 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488672 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488675 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488679 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488683 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488687 | orchestrator | skipping: [testbed-node-3]
2026-01-07
01:02:57.488690 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488694 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488698 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:02:57.488702 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488706 | orchestrator |
2026-01-07 01:02:57.488709 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-07 01:02:57.488717 | orchestrator | Wednesday 07 January 2026 01:00:43 +0000 (0:00:01.649) 0:01:59.854 *****
2026-01-07 01:02:57.488721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488725 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.488729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488733 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488748 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488759 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488767 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.488771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488775 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.488779 | orchestrator |
2026-01-07 01:02:57.488783 | orchestrator | TASK [service-check-containers : neutron | Check containers] *******************
2026-01-07 01:02:57.488786 | orchestrator | Wednesday 07 January 2026 01:00:45 +0000 (0:00:01.823) 0:02:01.679 *****
2026-01-07 01:02:57.488797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488832 | orchestrator |
2026-01-07 01:02:57.488835 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-01-07 01:02:57.488839 | orchestrator | Wednesday 07 January 2026 01:00:47 +0000 (0:00:02.152) 0:02:03.831 *****
2026-01-07 01:02:57.488843 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 01:02:57.488847 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488851 | orchestrator | }
2026-01-07 01:02:57.488855 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 01:02:57.488858 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488866 | orchestrator | }
2026-01-07 01:02:57.488869 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 01:02:57.488873 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488877 | orchestrator | }
2026-01-07 01:02:57.488881 | orchestrator | changed: [testbed-node-3] => {
2026-01-07 01:02:57.488885 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488888 | orchestrator | }
2026-01-07 01:02:57.488892 | orchestrator | changed: [testbed-node-4] => {
2026-01-07 01:02:57.488896 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488900 | orchestrator | }
2026-01-07 01:02:57.488903 | orchestrator | changed: [testbed-node-5] => {
2026-01-07 01:02:57.488907 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:02:57.488911 | orchestrator | }
2026-01-07 01:02:57.488914 | orchestrator |
2026-01-07
01:02:57.488918 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 01:02:57.488956 | orchestrator | Wednesday 07 January 2026 01:00:48 +0000 (0:00:00.740) 0:02:04.572 *****
2026-01-07 01:02:57.488963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488967 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.488971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.488975 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.488988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.488992 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.488996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:02:57.489008 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.489012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.489016 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.489020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:02:57.489024 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.489028 | orchestrator |
2026-01-07 01:02:57.489031 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-07 01:02:57.489035 | orchestrator | Wednesday 07 January 2026 01:00:52 +0000 (0:00:03.861) 0:02:08.433 *****
2026-01-07 01:02:57.489039 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:57.489043 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:57.489047 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:57.489050 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:02:57.489054 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:02:57.489058 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:02:57.489062 | orchestrator |
2026-01-07 01:02:57.489065 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-07 01:02:57.489069 | orchestrator | Wednesday 07 January 2026 01:00:52 +0000 (0:00:00.465) 0:02:08.899 *****
2026-01-07 01:02:57.489073 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:57.489077 | orchestrator |
2026-01-07 01:02:57.489081 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-07 01:02:57.489084 | orchestrator | Wednesday 07 January 2026 01:00:55 +0000 (0:00:02.839) 0:02:11.739 *****
2026-01-07 01:02:57.489088 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:57.489092 | orchestrator |
2026-01-07 01:02:57.489099 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-07 01:02:57.489103 | orchestrator | Wednesday 07 January 2026 01:00:57 +0000 (0:00:02.357) 0:02:14.096 *****
2026-01-07 01:02:57.489107 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:57.489111 | orchestrator |
2026-01-07 01:02:57.489115 |
orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489118 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:40.755) 0:02:54.852 *****
2026-01-07 01:02:57.489122 | orchestrator |
2026-01-07 01:02:57.489128 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489132 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.080) 0:02:54.933 *****
2026-01-07 01:02:57.489136 | orchestrator |
2026-01-07 01:02:57.489140 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489147 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.067) 0:02:55.001 *****
2026-01-07 01:02:57.489150 | orchestrator |
2026-01-07 01:02:57.489154 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489158 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.245) 0:02:55.247 *****
2026-01-07 01:02:57.489162 | orchestrator |
2026-01-07 01:02:57.489166 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489170 | orchestrator | Wednesday 07 January 2026 01:01:38 +0000 (0:00:00.063) 0:02:55.310 *****
2026-01-07 01:02:57.489173 | orchestrator |
2026-01-07 01:02:57.489177 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:02:57.489181 | orchestrator | Wednesday 07 January 2026 01:01:39 +0000 (0:00:00.062) 0:02:55.373 *****
2026-01-07 01:02:57.489185 | orchestrator |
2026-01-07 01:02:57.489188 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-07 01:02:57.489192 | orchestrator | Wednesday 07 January 2026 01:01:39 +0000 (0:00:00.064) 0:02:55.438 *****
2026-01-07 01:02:57.489196 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:57.489200 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:57.489204 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:57.489208 | orchestrator |
2026-01-07 01:02:57.489211 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-07 01:02:57.489215 | orchestrator | Wednesday 07 January 2026 01:02:06 +0000 (0:00:27.682) 0:03:23.121 *****
2026-01-07 01:02:57.489219 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:02:57.489223 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:02:57.489227 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:02:57.489230 | orchestrator |
2026-01-07 01:02:57.489234 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:02:57.489239 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:02:57.489244 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-07 01:02:57.489248 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-07 01:02:57.489252 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:02:57.489256 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:02:57.489259 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:02:57.489263 | orchestrator |
2026-01-07 01:02:57.489267 | orchestrator |
2026-01-07 01:02:57.489271 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:02:57.489280 | orchestrator | Wednesday 07 January 2026 01:02:55 +0000 (0:00:48.660) 0:04:11.781 *****
2026-01-07 01:02:57.489284 | orchestrator | ===============================================================================
2026-01-07 01:02:57.489287 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 48.66s
2026-01-07 01:02:57.489291 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.76s
2026-01-07 01:02:57.489295 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.68s
2026-01-07 01:02:57.489299 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.69s
2026-01-07 01:02:57.489303 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.57s
2026-01-07 01:02:57.489307 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 6.23s
2026-01-07 01:02:57.489310 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.86s
2026-01-07 01:02:57.489314 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.85s
2026-01-07 01:02:57.489318 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.73s
2026-01-07 01:02:57.489322 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.59s
2026-01-07 01:02:57.489326 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.57s
2026-01-07 01:02:57.489330 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.55s
2026-01-07 01:02:57.489334 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.52s
2026-01-07 01:02:57.489338 | orchestrator | Setting sysctl values --------------------------------------------------- 3.35s
2026-01-07 01:02:57.489342 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.19s
2026-01-07 01:02:57.489345 | orchestrator |
service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.19s
2026-01-07 01:02:57.489349 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.18s
2026-01-07 01:02:57.489353 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.07s
2026-01-07 01:02:57.489360 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.03s
2026-01-07 01:02:57.489364 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.90s
2026-01-07 01:02:57.489371 | orchestrator | 2026-01-07 01:02:57 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:02:57.489375 | orchestrator | 2026-01-07 01:02:57 | INFO  | Task 236707d4-c21c-4d4a-aa46-4e28226d070b is in state STARTED
2026-01-07 01:02:57.489379 | orchestrator | 2026-01-07 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:00.533131 | orchestrator | 2026-01-07 01:03:00 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:03:00.537112 | orchestrator | 2026-01-07 01:03:00 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:03:00.537734 | orchestrator | 2026-01-07 01:03:00 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED
2026-01-07 01:03:00.539172 | orchestrator | 2026-01-07 01:03:00 | INFO  | Task 236707d4-c21c-4d4a-aa46-4e28226d070b is in state STARTED
2026-01-07 01:03:00.539209 | orchestrator | 2026-01-07 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:03.574399 | orchestrator | 2026-01-07 01:03:03 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED
2026-01-07 01:03:03.575047 | orchestrator | 2026-01-07 01:03:03 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:03:03.575582 | orchestrator | 2026-01-07 01:03:03 | INFO  | Task
354e666e-500c-49e6-8d71-ce0856d7cb72 is in state STARTED 2026-01-07 01:03:31.003537 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task 236707d4-c21c-4d4a-aa46-4e28226d070b is in state STARTED 2026-01-07 01:03:31.003581 | orchestrator | 2026-01-07 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:34.061663 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:34.064205 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:34.066218 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:34.069695 | orchestrator | 2026-01-07 01:03:34.069775 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task 354e666e-500c-49e6-8d71-ce0856d7cb72 is in state SUCCESS 2026-01-07 01:03:34.071159 | orchestrator | 2026-01-07 01:03:34.071283 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:03:34.071298 | orchestrator | 2026-01-07 01:03:34.071304 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:03:34.071315 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.266) 0:00:00.266 ***** 2026-01-07 01:03:34.071321 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:34.071333 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:34.071360 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:34.071366 | orchestrator | 2026-01-07 01:03:34.071373 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:03:34.071379 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.306) 0:00:00.572 ***** 2026-01-07 01:03:34.071386 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-07 01:03:34.071418 | orchestrator | ok: [testbed-node-1] => 
(item=enable_magnum_True) 2026-01-07 01:03:34.071425 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-07 01:03:34.071431 | orchestrator | 2026-01-07 01:03:34.071438 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-07 01:03:34.071444 | orchestrator | 2026-01-07 01:03:34.071451 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-07 01:03:34.071457 | orchestrator | Wednesday 07 January 2026 01:01:48 +0000 (0:00:00.429) 0:00:01.002 ***** 2026-01-07 01:03:34.071464 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:03:34.071471 | orchestrator | 2026-01-07 01:03:34.071478 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-01-07 01:03:34.071499 | orchestrator | Wednesday 07 January 2026 01:01:49 +0000 (0:00:00.550) 0:00:01.553 ***** 2026-01-07 01:03:34.071504 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-01-07 01:03:34.071508 | orchestrator | 2026-01-07 01:03:34.071512 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-01-07 01:03:34.071516 | orchestrator | Wednesday 07 January 2026 01:01:53 +0000 (0:00:03.986) 0:00:05.539 ***** 2026-01-07 01:03:34.071520 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-01-07 01:03:34.071524 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-01-07 01:03:34.071528 | orchestrator | 2026-01-07 01:03:34.071532 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-01-07 01:03:34.071535 | orchestrator | Wednesday 07 January 2026 01:01:59 +0000 (0:00:06.126) 0:00:11.665 ***** 2026-01-07 01:03:34.071539 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:03:34.071544 | orchestrator | 2026-01-07 01:03:34.071547 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-01-07 01:03:34.071551 | orchestrator | Wednesday 07 January 2026 01:02:02 +0000 (0:00:03.094) 0:00:14.760 ***** 2026-01-07 01:03:34.071555 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:03:34.071559 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-01-07 01:03:34.071563 | orchestrator | 2026-01-07 01:03:34.071567 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-01-07 01:03:34.071571 | orchestrator | Wednesday 07 January 2026 01:02:06 +0000 (0:00:03.622) 0:00:18.382 ***** 2026-01-07 01:03:34.071575 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:03:34.071579 | orchestrator | 2026-01-07 01:03:34.071582 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-01-07 01:03:34.071586 | orchestrator | Wednesday 07 January 2026 01:02:09 +0000 (0:00:03.039) 0:00:21.422 ***** 2026-01-07 01:03:34.071590 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-01-07 01:03:34.071594 | orchestrator | 2026-01-07 01:03:34.071598 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-01-07 01:03:34.071602 | orchestrator | Wednesday 07 January 2026 01:02:13 +0000 (0:00:03.953) 0:00:25.375 ***** 2026-01-07 01:03:34.071606 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.071609 | orchestrator | 2026-01-07 01:03:34.071613 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-01-07 01:03:34.071617 | orchestrator | Wednesday 07 January 2026 01:02:17 +0000 (0:00:04.101) 0:00:29.476 ***** 2026-01-07 01:03:34.071621 | orchestrator | changed: 
[testbed-node-0] 2026-01-07 01:03:34.071787 | orchestrator | 2026-01-07 01:03:34.071791 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-01-07 01:03:34.071795 | orchestrator | Wednesday 07 January 2026 01:02:20 +0000 (0:00:03.334) 0:00:32.810 ***** 2026-01-07 01:03:34.071799 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.071803 | orchestrator | 2026-01-07 01:03:34.071807 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-07 01:03:34.071821 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:03.186) 0:00:35.997 ***** 2026-01-07 01:03:34.071865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.071882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.071891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.071899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.071906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.071926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.071934 | orchestrator | 2026-01-07 01:03:34.071941 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-07 01:03:34.071947 | orchestrator | Wednesday 07 January 2026 01:02:25 +0000 (0:00:01.620) 0:00:37.618 ***** 2026-01-07 01:03:34.071954 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:34.071961 | orchestrator | 2026-01-07 01:03:34.071966 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-01-07 01:03:34.071972 | orchestrator | Wednesday 07 January 2026 01:02:25 +0000 (0:00:00.303) 0:00:37.922 ***** 2026-01-07 01:03:34.071977 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:34.071984 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:34.071990 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:34.071996 | orchestrator | 2026-01-07 01:03:34.072003 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-01-07 01:03:34.072008 | orchestrator | Wednesday 07 January 2026 01:02:26 +0000 (0:00:00.992) 0:00:38.914 ***** 2026-01-07 01:03:34.072014 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:03:34.072020 | orchestrator | 2026-01-07 01:03:34.072028 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-01-07 01:03:34.072032 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:00.963) 0:00:39.877 ***** 2026-01-07 01:03:34.072036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072079 | orchestrator | 2026-01-07 01:03:34.072083 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-07 01:03:34.072087 | orchestrator | Wednesday 07 January 2026 01:02:30 +0000 (0:00:03.252) 0:00:43.129 ***** 2026-01-07 01:03:34.072090 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:03:34.072094 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:03:34.072098 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:03:34.072102 | orchestrator | 2026-01-07 01:03:34.072106 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-07 01:03:34.072110 | orchestrator | Wednesday 07 January 2026 01:02:31 +0000 (0:00:00.577) 0:00:43.707 ***** 2026-01-07 01:03:34.072114 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:03:34.072118 | orchestrator | 2026-01-07 01:03:34.072122 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-07 01:03:34.072126 | orchestrator | Wednesday 07 January 2026 01:02:32 +0000 (0:00:00.892) 0:00:44.599 ***** 2026-01-07 01:03:34.072133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:03:34.072152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:03:34.072167 | orchestrator | 2026-01-07 01:03:34.072171 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend 
internal TLS certificate] ***
2026-01-07 01:03:34.072175 | orchestrator | Wednesday 07 January 2026 01:02:34 +0000 (0:00:02.192) 0:00:46.792 *****
2026-01-07 01:03:34.072182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072193 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:03:34.072198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072206 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:03:34.072214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072232 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:03:34.072236 | orchestrator |
2026-01-07 01:03:34.072240 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-01-07 01:03:34.072244 | orchestrator | Wednesday 07 January 2026 01:02:35 +0000 (0:00:00.841) 0:00:47.633 *****
2026-01-07 01:03:34.072248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072266 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:03:34.072272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072282 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:03:34.072286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072294 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:03:34.072298 | orchestrator |
2026-01-07 01:03:34.072302 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-01-07 01:03:34.072306 | orchestrator | Wednesday 07 January 2026 01:02:36 +0000 (0:00:01.133) 0:00:48.767 *****
2026-01-07 01:03:34.072313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072349 | orchestrator |
2026-01-07 01:03:34.072425 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-01-07 01:03:34.072429 | orchestrator | Wednesday 07 January 2026 01:02:38 +0000 (0:00:02.055) 0:00:50.823 *****
2026-01-07 01:03:34.072436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072477 | orchestrator |
2026-01-07 01:03:34.072482 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-01-07 01:03:34.072485 | orchestrator | Wednesday 07 January 2026 01:02:43 +0000 (0:00:05.349) 0:00:56.172 *****
2026-01-07 01:03:34.072489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072498 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:03:34.072506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072521 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:03:34.072525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072533 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:03:34.072537 | orchestrator |
2026-01-07 01:03:34.072541 | orchestrator | TASK [service-check-containers : magnum | Check containers] ********************
2026-01-07 01:03:34.072546 | orchestrator | Wednesday 07 January 2026 01:02:44 +0000 (0:00:00.537) 0:00:56.710 *****
2026-01-07 01:03:34.072555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072618 | orchestrator |
2026-01-07 01:03:34.072624 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] ***
2026-01-07 01:03:34.072630 | orchestrator | Wednesday 07 January 2026 01:02:46 +0000 (0:00:02.034) 0:00:58.745 *****
2026-01-07 01:03:34.072635 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 01:03:34.072641 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:03:34.072647 | orchestrator | }
2026-01-07 01:03:34.072653 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 01:03:34.072660 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:03:34.072666 | orchestrator | }
2026-01-07 01:03:34.072672 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 01:03:34.072678 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:03:34.072685 | orchestrator | }
2026-01-07 01:03:34.072691 | orchestrator |
2026-01-07 01:03:34.072697 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 01:03:34.072703 | orchestrator | Wednesday 07 January 2026 01:02:46 +0000 (0:00:00.290) 0:00:59.035 *****
2026-01-07 01:03:34.072735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072750 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:03:34.072756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072776 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:03:34.072782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:03:34.072787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:03:34.072791 | orchestrator |
skipping: [testbed-node-2] 2026-01-07 01:03:34.072795 | orchestrator | 2026-01-07 01:03:34.072798 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-07 01:03:34.072802 | orchestrator | Wednesday 07 January 2026 01:02:47 +0000 (0:00:00.704) 0:00:59.740 ***** 2026-01-07 01:03:34.072806 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:03:34.072810 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:03:34.072814 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:03:34.072818 | orchestrator | 2026-01-07 01:03:34.072822 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-07 01:03:34.072825 | orchestrator | Wednesday 07 January 2026 01:02:47 +0000 (0:00:00.370) 0:01:00.111 ***** 2026-01-07 01:03:34.072829 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.072833 | orchestrator | 2026-01-07 01:03:34.072837 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-07 01:03:34.072841 | orchestrator | Wednesday 07 January 2026 01:02:49 +0000 (0:00:02.051) 0:01:02.162 ***** 2026-01-07 01:03:34.072844 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.072935 | orchestrator | 2026-01-07 01:03:34.072941 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-07 01:03:34.072949 | orchestrator | Wednesday 07 January 2026 01:02:52 +0000 (0:00:02.094) 0:01:04.257 ***** 2026-01-07 01:03:34.072953 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.072957 | orchestrator | 2026-01-07 01:03:34.072961 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-07 01:03:34.072967 | orchestrator | Wednesday 07 January 2026 01:03:08 +0000 (0:00:16.510) 0:01:20.767 ***** 2026-01-07 01:03:34.072973 | orchestrator | 2026-01-07 01:03:34.072978 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-01-07 01:03:34.072989 | orchestrator | Wednesday 07 January 2026 01:03:08 +0000 (0:00:00.062) 0:01:20.830 ***** 2026-01-07 01:03:34.072997 | orchestrator | 2026-01-07 01:03:34.073002 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-07 01:03:34.073008 | orchestrator | Wednesday 07 January 2026 01:03:08 +0000 (0:00:00.072) 0:01:20.902 ***** 2026-01-07 01:03:34.073014 | orchestrator | 2026-01-07 01:03:34.073020 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-07 01:03:34.073026 | orchestrator | Wednesday 07 January 2026 01:03:08 +0000 (0:00:00.065) 0:01:20.968 ***** 2026-01-07 01:03:34.073031 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.073037 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:34.073044 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:34.073050 | orchestrator | 2026-01-07 01:03:34.073057 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-07 01:03:34.073063 | orchestrator | Wednesday 07 January 2026 01:03:21 +0000 (0:00:12.484) 0:01:33.453 ***** 2026-01-07 01:03:34.073069 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:03:34.073081 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:03:34.073086 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:03:34.073090 | orchestrator | 2026-01-07 01:03:34.073095 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:03:34.073099 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:03:34.073105 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:03:34.073109 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-01-07 01:03:34.073113 | orchestrator | 2026-01-07 01:03:34.073117 | orchestrator | 2026-01-07 01:03:34.073121 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:03:34.073125 | orchestrator | Wednesday 07 January 2026 01:03:31 +0000 (0:00:09.953) 0:01:43.406 ***** 2026-01-07 01:03:34.073130 | orchestrator | =============================================================================== 2026-01-07 01:03:34.073133 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.51s 2026-01-07 01:03:34.073138 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.49s 2026-01-07 01:03:34.073142 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.95s 2026-01-07 01:03:34.073158 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.13s 2026-01-07 01:03:34.073168 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.35s 2026-01-07 01:03:34.073172 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.10s 2026-01-07 01:03:34.073176 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.99s 2026-01-07 01:03:34.073180 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 3.95s 2026-01-07 01:03:34.073184 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.62s 2026-01-07 01:03:34.073188 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.33s 2026-01-07 01:03:34.073197 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.25s 2026-01-07 01:03:34.073201 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.19s 2026-01-07 01:03:34.073205 | 
orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.09s 2026-01-07 01:03:34.073209 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.04s 2026-01-07 01:03:34.073213 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.19s 2026-01-07 01:03:34.073217 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.09s 2026-01-07 01:03:34.073221 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.06s 2026-01-07 01:03:34.073224 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.05s 2026-01-07 01:03:34.073228 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.03s 2026-01-07 01:03:34.073232 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.62s 2026-01-07 01:03:34.073236 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task 236707d4-c21c-4d4a-aa46-4e28226d070b is in state STARTED 2026-01-07 01:03:34.073241 | orchestrator | 2026-01-07 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:37.123115 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:37.127153 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:37.131758 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:37.133155 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task 236707d4-c21c-4d4a-aa46-4e28226d070b is in state SUCCESS 2026-01-07 01:03:37.135002 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:37.135133 | orchestrator | 2026-01-07 01:03:37 | INFO  | Wait 1 second(s) until the next 
check 2026-01-07 01:03:40.189580 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:40.192251 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:40.193561 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:40.194823 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:40.195566 | orchestrator | 2026-01-07 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:43.250534 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:43.251496 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:43.252365 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:43.253551 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:43.253596 | orchestrator | 2026-01-07 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:46.285384 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:46.285451 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:46.286987 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:46.287132 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:46.287142 | orchestrator | 2026-01-07 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 
01:03:49.319276 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:49.322492 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:49.324329 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:49.326207 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:49.326260 | orchestrator | 2026-01-07 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:52.372336 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:52.374486 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:52.377712 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:52.380156 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:52.380318 | orchestrator | 2026-01-07 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:55.419443 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:55.419504 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:55.420308 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:55.421395 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:55.421475 | orchestrator | 2026-01-07 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:58.462945 | orchestrator 
| 2026-01-07 01:03:58 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:03:58.464488 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:03:58.467194 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:03:58.469780 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:03:58.469927 | orchestrator | 2026-01-07 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:01.506561 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:01.507189 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:01.508491 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:01.509335 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:01.509375 | orchestrator | 2026-01-07 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:04.559219 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:04.559750 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:04.560676 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:04.561560 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:04.561644 | orchestrator | 2026-01-07 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:07.595564 | orchestrator | 2026-01-07 01:04:07 | INFO  | 
Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:07.597212 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:07.599230 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:07.601403 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:07.601474 | orchestrator | 2026-01-07 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:10.643160 | orchestrator | 2026-01-07 01:04:10 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:10.643427 | orchestrator | 2026-01-07 01:04:10 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:10.644882 | orchestrator | 2026-01-07 01:04:10 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:10.649020 | orchestrator | 2026-01-07 01:04:10 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:10.649098 | orchestrator | 2026-01-07 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:13.680363 | orchestrator | 2026-01-07 01:04:13 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:13.682062 | orchestrator | 2026-01-07 01:04:13 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:13.684620 | orchestrator | 2026-01-07 01:04:13 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:13.686007 | orchestrator | 2026-01-07 01:04:13 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:13.686098 | orchestrator | 2026-01-07 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:16.715828 | orchestrator | 2026-01-07 01:04:16 | INFO  | Task 
db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:16.716086 | orchestrator | 2026-01-07 01:04:16 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:16.717381 | orchestrator | 2026-01-07 01:04:16 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:16.719114 | orchestrator | 2026-01-07 01:04:16 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:16.719163 | orchestrator | 2026-01-07 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:19.751098 | orchestrator | 2026-01-07 01:04:19 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:19.751831 | orchestrator | 2026-01-07 01:04:19 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state STARTED 2026-01-07 01:04:19.752977 | orchestrator | 2026-01-07 01:04:19 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED 2026-01-07 01:04:19.753842 | orchestrator | 2026-01-07 01:04:19 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:04:19.753910 | orchestrator | 2026-01-07 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:22.792084 | orchestrator | 2026-01-07 01:04:22 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:04:22.792256 | orchestrator | 2026-01-07 01:04:22 | INFO  | Task cf136761-a0de-4901-acb4-e4251ff00343 is in state SUCCESS 2026-01-07 01:04:22.792530 | orchestrator | 2026-01-07 01:04:22.792552 | orchestrator | 2026-01-07 01:04:22.792559 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:04:22.792565 | orchestrator | 2026-01-07 01:04:22.792571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:04:22.792577 | orchestrator | Wednesday 07 January 2026 01:03:00 +0000 (0:00:00.257) 0:00:00.257 
***** 2026-01-07 01:04:22.792581 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:04:22.792585 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:04:22.792588 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:04:22.792592 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:04:22.792595 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:04:22.792598 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:04:22.792601 | orchestrator | ok: [testbed-manager] 2026-01-07 01:04:22.792604 | orchestrator | 2026-01-07 01:04:22.792608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:04:22.792611 | orchestrator | Wednesday 07 January 2026 01:03:01 +0000 (0:00:00.821) 0:00:01.078 ***** 2026-01-07 01:04:22.792614 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792618 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792621 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792624 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792630 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792634 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792639 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-01-07 01:04:22.792644 | orchestrator | 2026-01-07 01:04:22.792650 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-07 01:04:22.792655 | orchestrator | 2026-01-07 01:04:22.792660 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-07 01:04:22.792666 | orchestrator | Wednesday 07 January 2026 01:03:02 +0000 (0:00:00.736) 0:00:01.815 ***** 2026-01-07 01:04:22.792672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-01-07 01:04:22.792677 | orchestrator | 2026-01-07 01:04:22.792680 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-01-07 01:04:22.792683 | orchestrator | Wednesday 07 January 2026 01:03:04 +0000 (0:00:01.906) 0:00:03.722 ***** 2026-01-07 01:04:22.792686 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-01-07 01:04:22.792689 | orchestrator | 2026-01-07 01:04:22.792700 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-01-07 01:04:22.792704 | orchestrator | Wednesday 07 January 2026 01:03:08 +0000 (0:00:03.968) 0:00:07.690 ***** 2026-01-07 01:04:22.792708 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-01-07 01:04:22.792712 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-01-07 01:04:22.792715 | orchestrator | 2026-01-07 01:04:22.792718 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-01-07 01:04:22.792721 | orchestrator | Wednesday 07 January 2026 01:03:14 +0000 (0:00:06.273) 0:00:13.963 ***** 2026-01-07 01:04:22.792725 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:04:22.792731 | orchestrator | 2026-01-07 01:04:22.792736 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-01-07 01:04:22.792742 | orchestrator | Wednesday 07 January 2026 01:03:17 +0000 (0:00:03.020) 0:00:16.984 ***** 2026-01-07 01:04:22.792770 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:04:22.792776 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-01-07 01:04:22.792779 | orchestrator 
| 2026-01-07 01:04:22.792783 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-01-07 01:04:22.792786 | orchestrator | Wednesday 07 January 2026 01:03:22 +0000 (0:00:04.450) 0:00:21.435 ***** 2026-01-07 01:04:22.792791 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:04:22.792797 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-01-07 01:04:22.792802 | orchestrator | 2026-01-07 01:04:22.792807 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-01-07 01:04:22.792813 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:07.634) 0:00:29.070 ***** 2026-01-07 01:04:22.792818 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-01-07 01:04:22.792823 | orchestrator | 2026-01-07 01:04:22.792829 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:04:22.792834 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792840 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792845 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792848 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792851 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792859 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792869 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:22.792872 | orchestrator | 2026-01-07 01:04:22.792880 | orchestrator | 2026-01-07 
01:04:22.792886 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:04:22.792891 | orchestrator | Wednesday 07 January 2026 01:03:35 +0000 (0:00:05.305) 0:00:34.376 ***** 2026-01-07 01:04:22.792896 | orchestrator | =============================================================================== 2026-01-07 01:04:22.792901 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.63s 2026-01-07 01:04:22.792906 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 6.27s 2026-01-07 01:04:22.792911 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.31s 2026-01-07 01:04:22.792916 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.45s 2026-01-07 01:04:22.792922 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 3.97s 2026-01-07 01:04:22.792927 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.02s 2026-01-07 01:04:22.792933 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.91s 2026-01-07 01:04:22.792938 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2026-01-07 01:04:22.792944 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2026-01-07 01:04:22.792949 | orchestrator | 2026-01-07 01:04:22.792956 | orchestrator | 2026-01-07 01:04:22.792959 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-07 01:04:22.792962 | orchestrator | 2026-01-07 01:04:22.792965 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-07 01:04:22.792968 | orchestrator | Wednesday 07 January 2026 00:58:43 +0000 (0:00:00.121) 0:00:00.121 ***** 2026-01-07 01:04:22.792976 | 
orchestrator | changed: [localhost]
2026-01-07 01:04:22.792979 | orchestrator |
2026-01-07 01:04:22.792982 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-07 01:04:22.792986 | orchestrator | Wednesday 07 January 2026 00:58:44 +0000 (0:00:01.095) 0:00:01.217 *****
2026-01-07 01:04:22.792991 | orchestrator |
2026-01-07 01:04:22.792996 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793002 | orchestrator |
2026-01-07 01:04:22.793007 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793012 | orchestrator |
2026-01-07 01:04:22.793022 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793028 | orchestrator |
2026-01-07 01:04:22.793033 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793038 | orchestrator |
2026-01-07 01:04:22.793044 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793049 | orchestrator |
2026-01-07 01:04:22.793055 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-07 01:04:22.793118 | orchestrator | changed: [localhost]
2026-01-07 01:04:22.793124 | orchestrator |
2026-01-07 01:04:22.793127 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-07 01:04:22.793130 | orchestrator | Wednesday 07 January 2026 01:04:04 +0000 (0:05:19.826) 0:05:21.044 *****
2026-01-07 01:04:22.793133 | orchestrator | changed: [localhost]
2026-01-07 01:04:22.793136 | orchestrator |
2026-01-07 01:04:22.793140 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:04:22.793143 | orchestrator |
2026-01-07 01:04:22.793146 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:04:22.793149 | orchestrator | Wednesday 07 January 2026 01:04:18 +0000 (0:00:13.561) 0:05:34.605 *****
2026-01-07 01:04:22.793152 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:04:22.793156 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:04:22.793159 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:04:22.793162 | orchestrator |
2026-01-07 01:04:22.793165 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:04:22.793168 | orchestrator | Wednesday 07 January 2026 01:04:18 +0000 (0:00:00.384) 0:05:34.990 *****
2026-01-07 01:04:22.793171 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-07 01:04:22.793175 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-07 01:04:22.793178 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-07 01:04:22.793181 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-07 01:04:22.793184 | orchestrator |
2026-01-07 01:04:22.793187 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-07 01:04:22.793190 | orchestrator | skipping: no hosts matched
2026-01-07 01:04:22.793194 | orchestrator |
2026-01-07 01:04:22.793197 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:04:22.793200 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:04:22.793203 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:04:22.793206 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:04:22.793209 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:04:22.793213 | orchestrator |
2026-01-07 01:04:22.793216 | orchestrator |
2026-01-07 01:04:22.793219 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:04:22.793222 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:00.911) 0:05:35.902 *****
2026-01-07 01:04:22.793229 | orchestrator | ===============================================================================
2026-01-07 01:04:22.793236 | orchestrator | Download ironic-agent initramfs --------------------------------------- 319.83s
2026-01-07 01:04:22.793239 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.56s
2026-01-07 01:04:22.793242 | orchestrator | Ensure the destination directory exists --------------------------------- 1.10s
2026-01-07 01:04:22.793246 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s
2026-01-07 01:04:22.793249 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-01-07 01:04:22.793252 | orchestrator | 2026-01-07 01:04:22 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:04:22.794056 | orchestrator | 2026-01-07 01:04:22 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:04:22.794779 | orchestrator | 2026-01-07 01:04:22 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:04:22.794803 | orchestrator | 2026-01-07 01:04:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:04:25.823848 | orchestrator | 2026-01-07 01:04:25 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:04:25.824204 | orchestrator | 2026-01-07 01:04:25 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:04:25.824965 | orchestrator | 2026-01-07 01:04:25 | INFO  | Task
9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:02.280395 | orchestrator | 2026-01-07 01:05:02 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:02.282080 | orchestrator | 2026-01-07 01:05:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:05.304085 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:05.304571 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state STARTED
2026-01-07 01:05:05.307828 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:05.307866 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:05.307871 | orchestrator | 2026-01-07 01:05:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:08.333229 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:08.334967 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task b0bdb352-b415-4b79-9179-0c00eb28b18b is in state SUCCESS
2026-01-07 01:05:08.336125 | orchestrator |
2026-01-07 01:05:08.336163 | orchestrator |
2026-01-07 01:05:08.336172 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:05:08.336181 | orchestrator |
2026-01-07 01:05:08.336188 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:05:08.336195 | orchestrator | Wednesday 07 January 2026 01:02:12 +0000 (0:00:00.200) 0:00:00.200 *****
2026-01-07 01:05:08.336202 | orchestrator | ok: [testbed-manager]
2026-01-07 01:05:08.336210 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:05:08.336217 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:05:08.336224 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:05:08.336248 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:05:08.336256 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:05:08.336262 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:05:08.336269 | orchestrator |
2026-01-07 01:05:08.336275 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:05:08.336282 | orchestrator | Wednesday 07 January 2026 01:02:13 +0000 (0:00:00.611) 0:00:00.812 *****
2026-01-07 01:05:08.336289 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336296 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336311 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336318 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336325 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336375 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336385 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-07 01:05:08.336401 | orchestrator |
2026-01-07 01:05:08.336408 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-07 01:05:08.336415 | orchestrator |
2026-01-07 01:05:08.336422 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-07 01:05:08.336429 | orchestrator | Wednesday 07 January 2026 01:02:14 +0000 (0:00:00.522) 0:00:01.334 *****
2026-01-07 01:05:08.336437 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:05:08.336445 | orchestrator |
2026-01-07 01:05:08.336452 | orchestrator | TASK [prometheus : Ensuring config directories exist]
************************** 2026-01-07 01:05:08.336459 | orchestrator | Wednesday 07 January 2026 01:02:15 +0000 (0:00:01.065) 0:00:02.400 ***** 2026-01-07 01:05:08.336468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336535 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 01:05:08.336597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336692 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.336705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336761 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:05:08.336894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.336910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.336955 | orchestrator | 2026-01-07 01:05:08.336963 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-07 01:05:08.336970 | orchestrator | Wednesday 07 January 2026 01:02:17 +0000 (0:00:02.921) 0:00:05.321 ***** 2026-01-07 01:05:08.336977 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:05:08.336985 | orchestrator | 2026-01-07 
01:05:08.336992 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-07 01:05:08.336999 | orchestrator | Wednesday 07 January 2026 01:02:19 +0000 (0:00:01.399) 0:00:06.721 ***** 2026-01-07 01:05:08.337006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 01:05:08.337027 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.337136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 
01:05:08.337186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337267 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:05:08.337280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.337794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337818 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.337845 | orchestrator | 2026-01-07 01:05:08.337853 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-07 01:05:08.337860 | orchestrator | Wednesday 07 January 2026 01:02:24 +0000 (0:00:04.811) 0:00:11.532 ***** 2026-01-07 01:05:08.337867 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-07 01:05:08.337881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.337889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-07 01:05:08.337919 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.337931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.337939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.337946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.337957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.337965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.337972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.337979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338055 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.338069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338090 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.338097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:05:08.338143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338174 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.338213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338237 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.338244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338252 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338267 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.338296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338312 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.338323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338342 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.338349 | orchestrator | 2026-01-07 01:05:08.338357 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-07 01:05:08.338364 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:02.792) 0:00:14.325 ***** 2026-01-07 01:05:08.338371 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-07 01:05:08.338379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338438 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:05:08.338526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338561 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.338571 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338580 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.338606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338639 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.338661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338675 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.338683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:05:08.338690 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.338698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:05:08.338740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338770 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.338778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:05:08.338786 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.338793 | orchestrator | 2026-01-07 01:05:08.338799 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-07 01:05:08.338805 | orchestrator | Wednesday 07 January 2026 01:02:30 +0000 (0:00:03.183) 0:00:17.509 ***** 2026-01-07 01:05:08.338811 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338817 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 01:05:08.338845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.338895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338910 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.338929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.338936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.338943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.338950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338969 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.338980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.338990 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:05:08.338998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.339005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.339012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.339019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.339030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.339041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.339051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.339058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.339065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.339072 | orchestrator | 2026-01-07 01:05:08.339079 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-07 01:05:08.339086 | orchestrator | Wednesday 07 January 2026 01:02:35 +0000 (0:00:05.789) 0:00:23.298 ***** 2026-01-07 01:05:08.339092 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:05:08.339099 | orchestrator | 2026-01-07 01:05:08.339106 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-07 01:05:08.339113 | orchestrator | Wednesday 07 January 2026 01:02:36 +0000 (0:00:01.022) 0:00:24.321 ***** 2026-01-07 01:05:08.339119 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.339125 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.339132 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.339139 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.339145 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.339174 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.339181 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.339188 | orchestrator | 2026-01-07 01:05:08.339195 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-07 01:05:08.339202 | orchestrator | Wednesday 07 January 2026 01:02:37 +0000 (0:00:00.595) 0:00:24.916 ***** 2026-01-07 01:05:08.339209 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:05:08.339215 | orchestrator | 2026-01-07 01:05:08.339222 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-07 01:05:08.339229 | orchestrator | Wednesday 07 January 2026 01:02:38 +0000 (0:00:00.662) 0:00:25.579 ***** 2026-01-07 01:05:08.339237 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339244 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339251 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339258 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339266 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339273 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:05:08.339280 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339287 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339294 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339307 | orchestrator | 
node-1/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339314 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-07 01:05:08.339320 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339327 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339334 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339346 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339353 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:05:08.339360 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339379 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339386 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339393 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339399 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339406 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339413 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339420 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339427 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339434 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339448 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339458 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339466 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339473 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.339480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339487 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-07 01:05:08.339494 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-07 01:05:08.339505 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-07 01:05:08.339512 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 01:05:08.339519 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 01:05:08.339526 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 01:05:08.339532 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 01:05:08.339539 | orchestrator | 2026-01-07 01:05:08.339546 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-07 01:05:08.339553 | orchestrator | Wednesday 07 January 2026 01:02:40 +0000 (0:00:02.428) 0:00:28.008 ***** 2026-01-07 01:05:08.339560 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339567 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.339574 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339581 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.339588 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339595 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.339602 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339609 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.339616 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339622 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.339629 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:05:08.339636 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.339643 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-07 01:05:08.339668 | orchestrator | 2026-01-07 01:05:08.339675 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-07 01:05:08.339682 | orchestrator | Wednesday 07 January 2026 01:02:55 +0000 (0:00:14.333) 0:00:42.341 ***** 2026-01-07 01:05:08.339689 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339697 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339704 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.339711 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.339717 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339724 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.339731 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339738 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.339746 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339752 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 01:05:08.339759 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:05:08.339766 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.339773 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-07 01:05:08.339780 | orchestrator | 2026-01-07 01:05:08.339786 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-07 01:05:08.339793 | orchestrator | Wednesday 07 January 2026 01:02:58 +0000 (0:00:03.892) 0:00:46.234 ***** 2026-01-07 01:05:08.339799 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339806 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.339818 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339830 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339837 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.339843 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.339850 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339857 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.339864 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339870 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.339877 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-07 01:05:08.339889 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:05:08.339896 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.339902 | orchestrator | 2026-01-07 01:05:08.339910 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-07 01:05:08.339916 | orchestrator | Wednesday 07 January 2026 01:03:00 +0000 (0:00:01.870) 0:00:48.104 ***** 2026-01-07 01:05:08.339924 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:05:08.339931 | orchestrator | 2026-01-07 01:05:08.339938 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-07 01:05:08.339945 | orchestrator | Wednesday 07 January 2026 01:03:01 +0000 (0:00:00.763) 0:00:48.867 ***** 2026-01-07 01:05:08.339952 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.339959 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.339966 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.339973 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.339980 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.339987 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.339994 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.340001 | orchestrator | 2026-01-07 01:05:08.340008 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-07 01:05:08.340015 | orchestrator | Wednesday 07 January 2026 01:03:02 +0000 (0:00:00.722) 0:00:49.590 ***** 2026-01-07 01:05:08.340022 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.340029 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.340036 | orchestrator | skipping: [testbed-node-3] 
2026-01-07 01:05:08.340042 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.340049 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:08.340055 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:08.340062 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:08.340068 | orchestrator | 2026-01-07 01:05:08.340075 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-07 01:05:08.340082 | orchestrator | Wednesday 07 January 2026 01:03:04 +0000 (0:00:02.651) 0:00:52.241 ***** 2026-01-07 01:05:08.340088 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340095 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340102 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.340109 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340115 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.340121 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.340128 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340134 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.340146 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340153 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.340160 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340166 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.340173 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:05:08.340180 | orchestrator | skipping: [testbed-node-5] 
2026-01-07 01:05:08.340186 | orchestrator | 2026-01-07 01:05:08.340192 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-07 01:05:08.340199 | orchestrator | Wednesday 07 January 2026 01:03:06 +0000 (0:00:01.668) 0:00:53.910 ***** 2026-01-07 01:05:08.340206 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340212 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340219 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.340226 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.340232 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340239 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340245 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.340252 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.340259 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340265 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.340277 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:05:08.340284 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.340291 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-07 01:05:08.340297 | orchestrator | 2026-01-07 01:05:08.340304 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-07 01:05:08.340311 | orchestrator 
| Wednesday 07 January 2026 01:03:08 +0000 (0:00:01.432) 0:00:55.342 ***** 2026-01-07 01:05:08.340318 | orchestrator | [WARNING]: Skipped 2026-01-07 01:05:08.340325 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-07 01:05:08.340332 | orchestrator | due to this access issue: 2026-01-07 01:05:08.340338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-07 01:05:08.340345 | orchestrator | not a directory 2026-01-07 01:05:08.340352 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:05:08.340359 | orchestrator | 2026-01-07 01:05:08.340371 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-07 01:05:08.340378 | orchestrator | Wednesday 07 January 2026 01:03:09 +0000 (0:00:01.079) 0:00:56.422 ***** 2026-01-07 01:05:08.340385 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.340392 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.340399 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.340406 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.340413 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:05:08.340420 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.340427 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.340434 | orchestrator | 2026-01-07 01:05:08.340441 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-07 01:05:08.340447 | orchestrator | Wednesday 07 January 2026 01:03:10 +0000 (0:00:00.988) 0:00:57.411 ***** 2026-01-07 01:05:08.340454 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:05:08.340466 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:08.340474 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:08.340481 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:08.340488 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 01:05:08.340495 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:05:08.340502 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:05:08.340509 | orchestrator | 2026-01-07 01:05:08.340516 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-01-07 01:05:08.340523 | orchestrator | Wednesday 07 January 2026 01:03:10 +0000 (0:00:00.872) 0:00:58.284 ***** 2026-01-07 01:05:08.340531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340548 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-07 01:05:08.340561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:05:08.340607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.340614 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.340621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.340632 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.340660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.340674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.340681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:05:08.340689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:05:08.340696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:05:08.340717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340754 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.340787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.340815 | orchestrator |
2026-01-07 01:05:08.340823 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-01-07 01:05:08.340830 | orchestrator | Wednesday 07 January 2026 01:03:15 +0000 (0:00:04.438) 0:01:02.722 *****
2026-01-07 01:05:08.340837 | orchestrator | changed: [testbed-manager] => {
2026-01-07 01:05:08.340844 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340851 | orchestrator | }
2026-01-07 01:05:08.340857 | orchestrator | changed: [testbed-node-0] => {
2026-01-07 01:05:08.340864 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340871 | orchestrator | }
2026-01-07 01:05:08.340878 | orchestrator | changed: [testbed-node-1] => {
2026-01-07 01:05:08.340885 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340892 | orchestrator | }
2026-01-07 01:05:08.340899 | orchestrator | changed: [testbed-node-2] => {
2026-01-07 01:05:08.340906 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340913 | orchestrator | }
2026-01-07 01:05:08.340920 | orchestrator | changed: [testbed-node-3] => {
2026-01-07 01:05:08.340927 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340934 | orchestrator | }
2026-01-07 01:05:08.340941 | orchestrator | changed: [testbed-node-4] => {
2026-01-07 01:05:08.340948 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340955 | orchestrator | }
2026-01-07 01:05:08.340962 | orchestrator | changed: [testbed-node-5] => {
2026-01-07 01:05:08.340969 | orchestrator |  "msg": "Notifying handlers"
2026-01-07 01:05:08.340976 | orchestrator | }
2026-01-07 01:05:08.340983 | orchestrator |
2026-01-07 01:05:08.340990 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-07 01:05:08.340998 | orchestrator | Wednesday 07 January 2026 01:03:16 +0000 (0:00:00.784) 0:01:03.507 *****
2026-01-07 01:05:08.341006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-07 01:05:08.341022 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341041 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:05:08.341049 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341144 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:05:08.341151 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:05:08.341158 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:08.341165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 01:05:08.341211 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:08.341218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341245 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:05:08.341253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341282 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:05:08.341289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-07 01:05:08.341296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-07 01:05:08.341310 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:05:08.341323 | orchestrator |
2026-01-07 01:05:08.341329 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-07 01:05:08.341336 | orchestrator | Wednesday 07 January 2026 01:03:17 +0000 (0:00:01.645) 0:01:05.153 *****
2026-01-07 01:05:08.341344 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-07 01:05:08.341366 | orchestrator | skipping: [testbed-manager]
2026-01-07 01:05:08.341373 | orchestrator |
2026-01-07 01:05:08.341380 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341386 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:01.014) 0:01:06.167 *****
2026-01-07 01:05:08.341393 | orchestrator |
2026-01-07 01:05:08.341399 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341406 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:00.059) 0:01:06.227 *****
2026-01-07 01:05:08.341412 | orchestrator |
2026-01-07 01:05:08.341419 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341426 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:00.057) 0:01:06.285 *****
2026-01-07 01:05:08.341432 | orchestrator |
2026-01-07 01:05:08.341439 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341445 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.057) 0:01:06.342 *****
2026-01-07 01:05:08.341452 | orchestrator |
2026-01-07 01:05:08.341458 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341464 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.059) 0:01:06.402 *****
2026-01-07 01:05:08.341471 | orchestrator |
2026-01-07 01:05:08.341477 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341484 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.059) 0:01:06.462 *****
2026-01-07 01:05:08.341491 | orchestrator |
2026-01-07 01:05:08.341497 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-07 01:05:08.341504 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.207) 0:01:06.670 *****
2026-01-07 01:05:08.341510 | orchestrator |
2026-01-07 01:05:08.341517 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-07 01:05:08.341528 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.089) 0:01:06.759 *****
2026-01-07 01:05:08.341535 | orchestrator | changed: [testbed-manager]
2026-01-07 01:05:08.341541 | orchestrator |
2026-01-07 01:05:08.341548 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-07 01:05:08.341554 | orchestrator | Wednesday 07 January 2026 01:03:42 +0000 (0:00:22.947) 0:01:29.707 *****
2026-01-07 01:05:08.341561 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:08.341567 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:05:08.341603 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:05:08.341611 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:08.341618 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:08.341625 | orchestrator | changed: [testbed-manager]
2026-01-07 01:05:08.341631 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:05:08.341638 | orchestrator |
2026-01-07 01:05:08.341658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-07 01:05:08.341666 | orchestrator | Wednesday 07 January 2026 01:03:54 +0000 (0:00:11.901) 0:01:41.608 *****
2026-01-07 01:05:08.341672 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:08.341679 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:08.341685 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:08.341692 | orchestrator |
2026-01-07 01:05:08.341707 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-07 01:05:08.341714 | orchestrator | Wednesday 07 January 2026 01:04:00 +0000 (0:00:05.845) 0:01:47.454 *****
2026-01-07 01:05:08.341720 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:08.341726 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:08.341733 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:08.341739 | orchestrator |
2026-01-07 01:05:08.341752 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-07 01:05:08.341759 | orchestrator | Wednesday 07 January 2026 01:04:09 +0000 (0:00:09.825) 0:01:57.280 *****
2026-01-07 01:05:08.341766 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:08.341772 | orchestrator | changed: [testbed-manager]
2026-01-07 01:05:08.341779 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:08.341786 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:08.341792 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:05:08.341798 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:05:08.341804 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:05:08.341810 | orchestrator |
2026-01-07 01:05:08.341817 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-07 01:05:08.341824 | orchestrator | Wednesday 07 January 2026 01:04:23 +0000 (0:00:13.789) 0:02:11.070 *****
2026-01-07 01:05:08.341831 | orchestrator | changed: [testbed-manager]
2026-01-07 01:05:08.341838 | orchestrator |
2026-01-07 01:05:08.341845 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-07 01:05:08.341852 | orchestrator | Wednesday 07 January 2026 01:04:35 +0000 (0:00:11.561) 0:02:22.631 *****
2026-01-07 01:05:08.341858 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:08.341865 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:08.341872 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:08.341879 | orchestrator |
2026-01-07 01:05:08.341886 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-07 01:05:08.341893 | orchestrator | Wednesday 07 January 2026 01:04:45 +0000 (0:00:10.098) 0:02:32.730 *****
2026-01-07 01:05:08.341900 | orchestrator | changed: [testbed-manager]
2026-01-07 01:05:08.341907 | orchestrator |
2026-01-07 01:05:08.341914 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-07 01:05:08.341921 | orchestrator | Wednesday 07 January 2026 01:04:55 +0000 (0:00:10.268) 0:02:42.999 *****
2026-01-07 01:05:08.341928 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:05:08.341934 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:05:08.341941 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:05:08.341947 | orchestrator |
2026-01-07 01:05:08.341954 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:05:08.341961 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-07 01:05:08.341969 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:05:08.341976 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:05:08.341983 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:05:08.341990 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-07 01:05:08.341997 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-07 01:05:08.342005 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-07 01:05:08.342044 | orchestrator |
2026-01-07 01:05:08.342055 | orchestrator |
2026-01-07 01:05:08.342062 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:05:08.342069 | orchestrator | Wednesday 07 January 2026 01:05:07 +0000 (0:00:11.342) 0:02:54.341 *****
2026-01-07 01:05:08.342076 | orchestrator | ===============================================================================
2026-01-07 01:05:08.342089 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.95s
2026-01-07 01:05:08.342096 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.33s
2026-01-07 01:05:08.342110 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.79s
2026-01-07 01:05:08.342118 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.90s
2026-01-07 01:05:08.342125 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.56s
2026-01-07 01:05:08.342132 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.34s
2026-01-07 01:05:08.342138 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.27s
2026-01-07 01:05:08.342145 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.10s
2026-01-07 01:05:08.342152 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.83s
2026-01-07 01:05:08.342159 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.85s
2026-01-07 01:05:08.342166 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.79s
2026-01-07 01:05:08.342173 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 4.81s
2026-01-07 01:05:08.342184 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.44s
2026-01-07 01:05:08.342191 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.89s
2026-01-07 01:05:08.342197 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.18s
2026-01-07 01:05:08.342203 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.92s
2026-01-07 01:05:08.342210 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.79s
2026-01-07 01:05:08.342217 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.65s
2026-01-07 01:05:08.342223 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.43s
2026-01-07 01:05:08.342230 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.87s
2026-01-07 01:05:08.342237 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:08.342244 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:08.342251 | orchestrator | 2026-01-07 01:05:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:11.377587 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:11.377940 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:11.380868 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:05:11.380915 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:11.380921 | orchestrator | 2026-01-07 01:05:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:14.418757 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:14.420742 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:14.422310 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:05:14.424303 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:14.426864 | orchestrator | 2026-01-07 01:05:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:17.483066 | orchestrator | 2026-01-07 01:05:17 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:17.484109 | orchestrator | 2026-01-07 01:05:17 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:17.485428 | orchestrator | 2026-01-07 01:05:17 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:05:17.486964 | orchestrator | 2026-01-07 01:05:17 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:17.487012 | orchestrator | 2026-01-07 01:05:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:20.539847 | orchestrator | 2026-01-07 01:05:20 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:20.541161 | orchestrator | 2026-01-07 01:05:20 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:20.542215 | orchestrator | 2026-01-07 01:05:20 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:05:20.543506 | orchestrator | 2026-01-07 01:05:20 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED
2026-01-07 01:05:20.543549 | orchestrator | 2026-01-07 01:05:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:23.604024 | orchestrator | 2026-01-07 01:05:23 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED
2026-01-07 01:05:23.605872 | orchestrator | 2026-01-07 01:05:23 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:05:23.608096 | orchestrator | 2026-01-07 01:05:23 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:05:23.610648 | orchestrator | 2026-01-07 01:05:23 | INFO  | Task
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:23.610758 | orchestrator | 2026-01-07 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:26.653256 | orchestrator | 2026-01-07 01:05:26 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:26.656092 | orchestrator | 2026-01-07 01:05:26 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:26.658066 | orchestrator | 2026-01-07 01:05:26 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:26.659843 | orchestrator | 2026-01-07 01:05:26 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:26.659890 | orchestrator | 2026-01-07 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:29.704426 | orchestrator | 2026-01-07 01:05:29 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:29.706172 | orchestrator | 2026-01-07 01:05:29 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:29.708335 | orchestrator | 2026-01-07 01:05:29 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:29.709702 | orchestrator | 2026-01-07 01:05:29 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:29.709762 | orchestrator | 2026-01-07 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:32.752669 | orchestrator | 2026-01-07 01:05:32 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:32.754987 | orchestrator | 2026-01-07 01:05:32 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:32.757130 | orchestrator | 2026-01-07 01:05:32 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:32.758898 | orchestrator | 2026-01-07 01:05:32 | INFO  | Task 
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:32.758993 | orchestrator | 2026-01-07 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:35.801418 | orchestrator | 2026-01-07 01:05:35 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:35.803745 | orchestrator | 2026-01-07 01:05:35 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:35.806168 | orchestrator | 2026-01-07 01:05:35 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:35.807957 | orchestrator | 2026-01-07 01:05:35 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:35.808060 | orchestrator | 2026-01-07 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:38.858668 | orchestrator | 2026-01-07 01:05:38 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:38.860580 | orchestrator | 2026-01-07 01:05:38 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:38.862243 | orchestrator | 2026-01-07 01:05:38 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:38.863934 | orchestrator | 2026-01-07 01:05:38 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:38.863984 | orchestrator | 2026-01-07 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:41.911447 | orchestrator | 2026-01-07 01:05:41 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:41.914596 | orchestrator | 2026-01-07 01:05:41 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:41.916007 | orchestrator | 2026-01-07 01:05:41 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:41.917105 | orchestrator | 2026-01-07 01:05:41 | INFO  | Task 
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:41.917145 | orchestrator | 2026-01-07 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:44.961804 | orchestrator | 2026-01-07 01:05:44 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:44.964461 | orchestrator | 2026-01-07 01:05:44 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:44.966536 | orchestrator | 2026-01-07 01:05:44 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:44.970203 | orchestrator | 2026-01-07 01:05:44 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:44.970791 | orchestrator | 2026-01-07 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:48.035313 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:48.037700 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:48.039700 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:48.041813 | orchestrator | 2026-01-07 01:05:48 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:48.041853 | orchestrator | 2026-01-07 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:51.084345 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:51.085180 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:51.086131 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:51.090208 | orchestrator | 2026-01-07 01:05:51 | INFO  | Task 
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:51.090770 | orchestrator | 2026-01-07 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:54.134183 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:54.134246 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:54.134262 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:54.134270 | orchestrator | 2026-01-07 01:05:54 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:54.134277 | orchestrator | 2026-01-07 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:57.218480 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:05:57.219894 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:05:57.221468 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:05:57.223387 | orchestrator | 2026-01-07 01:05:57 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:05:57.223419 | orchestrator | 2026-01-07 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:00.268377 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:00.269018 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:00.269756 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:00.270550 | orchestrator | 2026-01-07 01:06:00 | INFO  | Task 
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:00.270567 | orchestrator | 2026-01-07 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:03.307194 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:03.309971 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:03.313191 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:03.315544 | orchestrator | 2026-01-07 01:06:03 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:03.315615 | orchestrator | 2026-01-07 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:06.357574 | orchestrator | 2026-01-07 01:06:06 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:06.357634 | orchestrator | 2026-01-07 01:06:06 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:06.357643 | orchestrator | 2026-01-07 01:06:06 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:06.358512 | orchestrator | 2026-01-07 01:06:06 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:06.358562 | orchestrator | 2026-01-07 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:09.401950 | orchestrator | 2026-01-07 01:06:09 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:09.403607 | orchestrator | 2026-01-07 01:06:09 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:09.407248 | orchestrator | 2026-01-07 01:06:09 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:09.408757 | orchestrator | 2026-01-07 01:06:09 | INFO  | Task 
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:09.408859 | orchestrator | 2026-01-07 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:12.452379 | orchestrator | 2026-01-07 01:06:12 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:12.454885 | orchestrator | 2026-01-07 01:06:12 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:12.456493 | orchestrator | 2026-01-07 01:06:12 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:12.459552 | orchestrator | 2026-01-07 01:06:12 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:12.459612 | orchestrator | 2026-01-07 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:15.504803 | orchestrator | 2026-01-07 01:06:15 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state STARTED 2026-01-07 01:06:15.505051 | orchestrator | 2026-01-07 01:06:15 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:15.509509 | orchestrator | 2026-01-07 01:06:15 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:15.511672 | orchestrator | 2026-01-07 01:06:15 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:15.511726 | orchestrator | 2026-01-07 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:18.538706 | orchestrator | 2026-01-07 01:06:18 | INFO  | Task db22fd98-895f-4a77-8555-c04bc1b0fdc8 is in state SUCCESS 2026-01-07 01:06:18.539888 | orchestrator | 2026-01-07 01:06:18.539939 | orchestrator | 2026-01-07 01:06:18.539946 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:06:18.539953 | orchestrator | 2026-01-07 01:06:18.539958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2026-01-07 01:06:18.539964 | orchestrator | Wednesday 07 January 2026 01:03:35 +0000 (0:00:00.313) 0:00:00.313 *****
2026-01-07 01:06:18.539970 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:18.539976 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:18.539981 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:18.539986 | orchestrator |
2026-01-07 01:06:18.539991 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:06:18.539996 | orchestrator | Wednesday 07 January 2026 01:03:36 +0000 (0:00:00.324) 0:00:00.637 *****
2026-01-07 01:06:18.540001 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-07 01:06:18.540006 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-07 01:06:18.540011 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-07 01:06:18.540016 | orchestrator |
2026-01-07 01:06:18.540021 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-07 01:06:18.540025 | orchestrator |
2026-01-07 01:06:18.540030 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-07 01:06:18.540077 | orchestrator | Wednesday 07 January 2026 01:03:36 +0000 (0:00:00.606) 0:00:01.243 *****
2026-01-07 01:06:18.540084 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:06:18.540089 | orchestrator |
2026-01-07 01:06:18.540094 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] ***************
2026-01-07 01:06:18.540099 | orchestrator | Wednesday 07 January 2026 01:03:37 +0000 (0:00:00.542) 0:00:01.786 *****
2026-01-07 01:06:18.540119 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-07 01:06:18.540274 | orchestrator |
2026-01-07 01:06:18.540279 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] **************
2026-01-07 01:06:18.540284 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:04.236) 0:00:06.022 *****
2026-01-07 01:06:18.540289 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-07 01:06:18.540295 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-07 01:06:18.540299 | orchestrator |
2026-01-07 01:06:18.540304 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-07 01:06:18.540309 | orchestrator | Wednesday 07 January 2026 01:03:49 +0000 (0:00:07.582) 0:00:13.605 *****
2026-01-07 01:06:18.540314 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:06:18.540320 | orchestrator |
2026-01-07 01:06:18.540324 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-07 01:06:18.540329 | orchestrator | Wednesday 07 January 2026 01:03:52 +0000 (0:00:03.164) 0:00:16.770 *****
2026-01-07 01:06:18.540334 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:06:18.540340 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-07 01:06:18.540345 | orchestrator |
2026-01-07 01:06:18.540350 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-07 01:06:18.540355 | orchestrator | Wednesday 07 January 2026 01:03:55 +0000 (0:00:03.654) 0:00:20.424 *****
2026-01-07 01:06:18.540360 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:06:18.540365 | orchestrator |
2026-01-07 01:06:18.540370 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] *************
2026-01-07 01:06:18.540374 | orchestrator | Wednesday 07 January 2026 01:03:59 +0000 (0:00:04.057) 0:00:24.482 *****
2026-01-07 01:06:18.540380 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-07 01:06:18.540385 | orchestrator |
2026-01-07 01:06:18.540398 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-07 01:06:18.540403 | orchestrator | Wednesday 07 January 2026 01:04:04 +0000 (0:00:04.439) 0:00:28.921 *****
2026-01-07 01:06:18.540422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540450 | orchestrator |
2026-01-07 01:06:18.540455 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-07 01:06:18.540460 | orchestrator | Wednesday 07 January 2026 01:04:07 +0000 (0:00:02.955) 0:00:31.877 *****
2026-01-07 01:06:18.540469 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:06:18.540474 | orchestrator |
2026-01-07 01:06:18.540479 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-07 01:06:18.540487 | orchestrator | Wednesday 07 January 2026 01:04:07 +0000 (0:00:00.637) 0:00:32.515 *****
2026-01-07 01:06:18.540492 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:06:18.540497 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:18.540502 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:06:18.540507 | orchestrator |
2026-01-07 01:06:18.540512 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-07 01:06:18.540518 | orchestrator | Wednesday 07 January 2026 01:04:11 +0000 (0:00:04.019) 0:00:36.535 *****
2026-01-07 01:06:18.540523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540544 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540549 | orchestrator |
2026-01-07 01:06:18.540554 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-07 01:06:18.540559 | orchestrator | Wednesday 07 January 2026 01:04:14 +0000 (0:00:02.142) 0:00:38.677 *****
2026-01-07 01:06:18.540564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540569 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-07 01:06:18.540579 | orchestrator |
2026-01-07 01:06:18.540584 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-07 01:06:18.540589 | orchestrator | Wednesday 07 January 2026 01:04:15 +0000 (0:00:01.276) 0:00:39.953 *****
2026-01-07 01:06:18.540594 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:18.540599 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:18.540604 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:18.540609 | orchestrator |
2026-01-07 01:06:18.540614 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-01-07 01:06:18.540619 | orchestrator | Wednesday 07 January 2026 01:04:16 +0000 (0:00:00.772) 0:00:40.726 *****
2026-01-07 01:06:18.540624 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:18.540629 | orchestrator |
2026-01-07 01:06:18.540635 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-07 01:06:18.540640 | orchestrator | Wednesday 07 January 2026 01:04:16 +0000 (0:00:00.121) 0:00:40.848 *****
2026-01-07 01:06:18.540645 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:18.540650 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:18.540655 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:18.540660 | orchestrator |
2026-01-07 01:06:18.540665 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-07 01:06:18.540670 | orchestrator | Wednesday 07 January 2026 01:04:16 +0000 (0:00:00.244) 0:00:41.092 *****
2026-01-07 01:06:18.540675 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:06:18.540680 | orchestrator |
2026-01-07 01:06:18.540685 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-01-07 01:06:18.540692 | orchestrator | Wednesday 07 January 2026 01:04:17 +0000 (0:00:00.492) 0:00:41.584 *****
2026-01-07 01:06:18.540702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 01:06:18.540729 | orchestrator |
2026-01-07 01:06:18.540734 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-01-07 01:06:18.540739 | orchestrator | Wednesday 07 January 2026 01:04:21 +0000 (0:00:04.226) 0:00:45.811 *****
2026-01-07 01:06:18.540748 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540756 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.540764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540773 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.540782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540788 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.540793 | orchestrator | 2026-01-07 01:06:18.540798 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-07 01:06:18.540803 | orchestrator | Wednesday 07 January 2026 01:04:24 +0000 (0:00:03.475) 0:00:49.286 ***** 2026-01-07 01:06:18.540811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540819 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.540828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540834 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.540840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.540845 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.540853 | orchestrator | 2026-01-07 01:06:18.540858 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-07 01:06:18.540868 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:03.827) 0:00:53.114 ***** 2026-01-07 01:06:18.540874 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.540879 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.540884 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.540889 | orchestrator | 2026-01-07 01:06:18.540894 | orchestrator | TASK [glance : Copying over config.json 
files for services] ******************** 2026-01-07 01:06:18.540899 | orchestrator | Wednesday 07 January 2026 01:04:32 +0000 (0:00:03.895) 0:00:57.009 ***** 2026-01-07 01:06:18.540907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.540913 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.540925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.540931 | orchestrator | 2026-01-07 01:06:18.540939 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-07 01:06:18.540944 | orchestrator | Wednesday 07 January 2026 01:04:36 +0000 (0:00:03.749) 0:01:00.759 ***** 2026-01-07 01:06:18.540949 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:18.540954 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.540959 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:18.540965 | 
orchestrator | 2026-01-07 01:06:18.540970 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-07 01:06:18.540974 | orchestrator | Wednesday 07 January 2026 01:04:41 +0000 (0:00:05.485) 0:01:06.245 ***** 2026-01-07 01:06:18.540979 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.540984 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.540989 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.540993 | orchestrator | 2026-01-07 01:06:18.540998 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-07 01:06:18.541003 | orchestrator | Wednesday 07 January 2026 01:04:44 +0000 (0:00:03.110) 0:01:09.356 ***** 2026-01-07 01:06:18.541008 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541012 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541018 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.541023 | orchestrator | 2026-01-07 01:06:18.541027 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-07 01:06:18.541033 | orchestrator | Wednesday 07 January 2026 01:04:48 +0000 (0:00:03.581) 0:01:12.937 ***** 2026-01-07 01:06:18.541038 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541043 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541048 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.541052 | orchestrator | 2026-01-07 01:06:18.541057 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-07 01:06:18.541062 | orchestrator | Wednesday 07 January 2026 01:04:51 +0000 (0:00:03.368) 0:01:16.305 ***** 2026-01-07 01:06:18.541066 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541071 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541079 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.541083 | 
orchestrator | 2026-01-07 01:06:18.541088 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-07 01:06:18.541094 | orchestrator | Wednesday 07 January 2026 01:04:52 +0000 (0:00:00.268) 0:01:16.574 ***** 2026-01-07 01:06:18.541099 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:06:18.541104 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541109 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:06:18.541115 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541120 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:06:18.541125 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.541131 | orchestrator | 2026-01-07 01:06:18.541134 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-07 01:06:18.541137 | orchestrator | Wednesday 07 January 2026 01:04:55 +0000 (0:00:03.017) 0:01:19.591 ***** 2026-01-07 01:06:18.541140 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541144 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:18.541147 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:18.541150 | orchestrator | 2026-01-07 01:06:18.541155 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-01-07 01:06:18.541160 | orchestrator | Wednesday 07 January 2026 01:05:00 +0000 (0:00:05.669) 0:01:25.261 ***** 2026-01-07 01:06:18.541172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.541195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.541209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:06:18.541215 | orchestrator | 2026-01-07 01:06:18.541220 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-01-07 01:06:18.541226 | orchestrator | Wednesday 07 January 2026 01:05:04 +0000 (0:00:04.140) 0:01:29.402 ***** 2026-01-07 01:06:18.541230 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:06:18.541234 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:18.541237 | orchestrator | } 2026-01-07 01:06:18.541240 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:06:18.541243 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:18.541246 | orchestrator | } 2026-01-07 01:06:18.541249 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:06:18.541252 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:18.541255 | orchestrator | } 2026-01-07 01:06:18.541259 | orchestrator | 2026-01-07 01:06:18.541262 | orchestrator | TASK [service-check-containers : 
Include tasks] ******************************** 2026-01-07 01:06:18.541267 | orchestrator | Wednesday 07 January 2026 01:05:05 +0000 (0:00:00.289) 0:01:29.692 ***** 2026-01-07 01:06:18.541271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.541278 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:06:18.541283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.541287 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:06:18.541300 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541303 | orchestrator | 2026-01-07 01:06:18.541306 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:06:18.541309 | orchestrator | Wednesday 07 January 2026 01:05:08 +0000 (0:00:03.812) 0:01:33.504 ***** 
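The `custom_member_list` entries in the glance-api haproxy configuration above are plain haproxy `server` lines, one per backend node. A hedged sketch of how such lines could be generated from a node/IP map (the helper name `haproxy_members` is hypothetical; kolla-ansible actually renders these through Jinja2 templates):

```python
# Sketch (assumption): build haproxy backend "server" lines matching the
# custom_member_list entries in the glance-api config above. The function
# name haproxy_members is hypothetical, not part of kolla-ansible.

def haproxy_members(nodes: dict[str, str], port: int,
                    check: str = "check inter 2000 rise 2 fall 5") -> list[str]:
    """Return one 'server <name> <ip>:<port> <check>' line per node."""
    return [f"server {name} {ip}:{port} {check}"
            for name, ip in sorted(nodes.items())]

members = haproxy_members(
    {"testbed-node-0": "192.168.16.10",
     "testbed-node-1": "192.168.16.11",
     "testbed-node-2": "192.168.16.12"},
    port=9292,
)
print(members[0])
# → server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
```

The `check inter 2000 rise 2 fall 5` suffix mirrors the health-check parameters visible in the log: probe every 2000 ms, two successes to mark a member up, five failures to mark it down.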
2026-01-07 01:06:18.541312 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:18.541315 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:18.541318 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:18.541321 | orchestrator | 2026-01-07 01:06:18.541325 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-07 01:06:18.541328 | orchestrator | Wednesday 07 January 2026 01:05:09 +0000 (0:00:00.492) 0:01:33.997 ***** 2026-01-07 01:06:18.541331 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541334 | orchestrator | 2026-01-07 01:06:18.541337 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-07 01:06:18.541340 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:02.062) 0:01:36.060 ***** 2026-01-07 01:06:18.541343 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541346 | orchestrator | 2026-01-07 01:06:18.541349 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-07 01:06:18.541352 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:02.092) 0:01:38.152 ***** 2026-01-07 01:06:18.541355 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541358 | orchestrator | 2026-01-07 01:06:18.541361 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-07 01:06:18.541364 | orchestrator | Wednesday 07 January 2026 01:05:15 +0000 (0:00:02.012) 0:01:40.164 ***** 2026-01-07 01:06:18.541367 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541371 | orchestrator | 2026-01-07 01:06:18.541374 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-07 01:06:18.541377 | orchestrator | Wednesday 07 January 2026 01:05:42 +0000 (0:00:27.189) 0:02:07.354 ***** 2026-01-07 01:06:18.541380 | orchestrator | changed: [testbed-node-0] 
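The Glance bootstrap above follows a fixed pattern: create the database and user, enable `log_bin_trust_function_creators` so the bootstrap container may define stored functions while binary logging is active, run the bootstrap container, then disable the flag again. A minimal standalone sketch of the SQL sequence (the helper name `plan_bootstrap_sql` is hypothetical; kolla-ansible drives these steps through its own Ansible MySQL modules):

```python
# Sketch (assumption): the enable -> bootstrap -> disable sequence seen in
# the glance tasks above, expressed as plain SQL. plan_bootstrap_sql is a
# hypothetical illustration helper, not kolla-ansible code.

def plan_bootstrap_sql(database: str, user: str,
                       host_pattern: str = "%") -> list[str]:
    """Return the SQL statements, in order, for a Glance-style bootstrap."""
    return [
        f"CREATE DATABASE IF NOT EXISTS `{database}`;",
        f"GRANT ALL PRIVILEGES ON `{database}`.* TO '{user}'@'{host_pattern}';",
        # Allow function creation while the binary log is enabled.
        "SET GLOBAL log_bin_trust_function_creators = 1;",
        f"-- run the {database} bootstrap container (db_sync) here",
        "SET GLOBAL log_bin_trust_function_creators = 0;",
    ]

statements = plan_bootstrap_sql("glance", "glance")
print(statements[2])
# → SET GLOBAL log_bin_trust_function_creators = 1;
```

Toggling the flag only for the duration of the bootstrap, as the log shows, keeps the more permissive setting from lingering on the database cluster.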
2026-01-07 01:06:18.541383 | orchestrator | 2026-01-07 01:06:18.541386 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:06:18.541391 | orchestrator | Wednesday 07 January 2026 01:05:44 +0000 (0:00:02.102) 0:02:09.457 ***** 2026-01-07 01:06:18.541394 | orchestrator | 2026-01-07 01:06:18.541397 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:06:18.541401 | orchestrator | Wednesday 07 January 2026 01:05:44 +0000 (0:00:00.091) 0:02:09.548 ***** 2026-01-07 01:06:18.541404 | orchestrator | 2026-01-07 01:06:18.541407 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:06:18.541410 | orchestrator | Wednesday 07 January 2026 01:05:45 +0000 (0:00:00.092) 0:02:09.641 ***** 2026-01-07 01:06:18.541413 | orchestrator | 2026-01-07 01:06:18.541416 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-07 01:06:18.541422 | orchestrator | Wednesday 07 January 2026 01:05:45 +0000 (0:00:00.094) 0:02:09.736 ***** 2026-01-07 01:06:18.541425 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:18.541428 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:18.541431 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:18.541434 | orchestrator | 2026-01-07 01:06:18.541437 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:06:18.541441 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:06:18.541445 | orchestrator | testbed-node-1 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:06:18.541448 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:06:18.541451 | orchestrator | 2026-01-07 01:06:18.541454 
| orchestrator | 2026-01-07 01:06:18.541457 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:06:18.541460 | orchestrator | Wednesday 07 January 2026 01:06:16 +0000 (0:00:31.027) 0:02:40.763 ***** 2026-01-07 01:06:18.541465 | orchestrator | =============================================================================== 2026-01-07 01:06:18.541468 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.03s 2026-01-07 01:06:18.541471 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.19s 2026-01-07 01:06:18.541474 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 7.58s 2026-01-07 01:06:18.541478 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.67s 2026-01-07 01:06:18.541481 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.49s 2026-01-07 01:06:18.541484 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.44s 2026-01-07 01:06:18.541487 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 4.24s 2026-01-07 01:06:18.541490 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.23s 2026-01-07 01:06:18.541493 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.14s 2026-01-07 01:06:18.541496 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.06s 2026-01-07 01:06:18.541500 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.02s 2026-01-07 01:06:18.541503 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.90s 2026-01-07 01:06:18.541506 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.83s 
2026-01-07 01:06:18.541554 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.81s 2026-01-07 01:06:18.541559 | orchestrator | glance : Copying over config.json files for services -------------------- 3.75s 2026-01-07 01:06:18.541562 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.65s 2026-01-07 01:06:18.541565 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.58s 2026-01-07 01:06:18.541568 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.48s 2026-01-07 01:06:18.541571 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.37s 2026-01-07 01:06:18.541574 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.16s 2026-01-07 01:06:18.541578 | orchestrator | 2026-01-07 01:06:18 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:18.541585 | orchestrator | 2026-01-07 01:06:18 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:18.543285 | orchestrator | 2026-01-07 01:06:18 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:06:18.545500 | orchestrator | 2026-01-07 01:06:18 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:18.545561 | orchestrator | 2026-01-07 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:48.990938 | orchestrator | 2026-01-07 01:06:48 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:48.991761 | orchestrator | 2026-01-07 01:06:48 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:48.993153 | orchestrator | 2026-01-07 01:06:48 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:06:48.994484 | orchestrator | 2026-01-07 01:06:48 | INFO  | Task
011b3051-0a57-453c-b363-d54b733867bc is in state STARTED 2026-01-07 01:06:48.994516 | orchestrator | 2026-01-07 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:52.043580 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:52.046544 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:52.049393 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:06:52.054273 | orchestrator | 2026-01-07 01:06:52 | INFO  | Task 011b3051-0a57-453c-b363-d54b733867bc is in state SUCCESS 2026-01-07 01:06:52.055832 | orchestrator | 2026-01-07 01:06:52.055894 | orchestrator | 2026-01-07 01:06:52.055902 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:06:52.055931 | orchestrator | 2026-01-07 01:06:52.055938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:06:52.055941 | orchestrator | Wednesday 07 January 2026 01:03:40 +0000 (0:00:00.252) 0:00:00.252 ***** 2026-01-07 01:06:52.055945 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:06:52.055950 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:06:52.055955 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:06:52.055960 | orchestrator | 2026-01-07 01:06:52.055965 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:06:52.055970 | orchestrator | Wednesday 07 January 2026 01:03:40 +0000 (0:00:00.300) 0:00:00.552 ***** 2026-01-07 01:06:52.055975 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-07 01:06:52.055980 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-07 01:06:52.055986 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-07 
01:06:52.055991 | orchestrator | 2026-01-07 01:06:52.055996 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-07 01:06:52.056001 | orchestrator | 2026-01-07 01:06:52.056007 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:06:52.056013 | orchestrator | Wednesday 07 January 2026 01:03:40 +0000 (0:00:00.420) 0:00:00.972 ***** 2026-01-07 01:06:52.056018 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:06:52.056024 | orchestrator | 2026-01-07 01:06:52.056029 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] *************** 2026-01-07 01:06:52.056037 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.520) 0:00:01.493 ***** 2026-01-07 01:06:52.056045 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage)) 2026-01-07 01:06:52.056050 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-07 01:06:52.056056 | orchestrator | 2026-01-07 01:06:52.056061 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] ************** 2026-01-07 01:06:52.056066 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:07.252) 0:00:08.745 ***** 2026-01-07 01:06:52.056071 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal) 2026-01-07 01:06:52.056076 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public) 2026-01-07 01:06:52.056081 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-07 01:06:52.056087 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-07 01:06:52.056092 | 
orchestrator | 2026-01-07 01:06:52.056097 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-07 01:06:52.056102 | orchestrator | Wednesday 07 January 2026 01:04:02 +0000 (0:00:14.116) 0:00:22.861 ***** 2026-01-07 01:06:52.056107 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:06:52.056112 | orchestrator | 2026-01-07 01:06:52.056118 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-07 01:06:52.056123 | orchestrator | Wednesday 07 January 2026 01:04:06 +0000 (0:00:03.330) 0:00:26.192 ***** 2026-01-07 01:06:52.056128 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:06:52.056144 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-07 01:06:52.056150 | orchestrator | 2026-01-07 01:06:52.056156 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-07 01:06:52.056161 | orchestrator | Wednesday 07 January 2026 01:04:10 +0000 (0:00:04.072) 0:00:30.264 ***** 2026-01-07 01:06:52.056166 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:06:52.056171 | orchestrator | 2026-01-07 01:06:52.056176 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-01-07 01:06:52.056181 | orchestrator | Wednesday 07 January 2026 01:04:14 +0000 (0:00:04.064) 0:00:34.329 ***** 2026-01-07 01:06:52.056192 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-07 01:06:52.056198 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-07 01:06:52.056203 | orchestrator | 2026-01-07 01:06:52.056208 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-07 01:06:52.056214 | orchestrator | Wednesday 07 January 2026 01:04:21 +0000 (0:00:07.695) 0:00:42.024 ***** 2026-01-07 
01:06:52.056335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.056677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.056693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.056745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.056822 | orchestrator | 2026-01-07 01:06:52.056829 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:06:52.056833 | orchestrator | Wednesday 07 January 2026 01:04:24 +0000 (0:00:02.448) 0:00:44.473 ***** 2026-01-07 01:06:52.056837 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.056840 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.056844 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:06:52.056847 | orchestrator | 2026-01-07 01:06:52.056850 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:06:52.056853 | orchestrator | Wednesday 07 January 2026 01:04:24 +0000 (0:00:00.321) 0:00:44.794 ***** 2026-01-07 01:06:52.056857 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:06:52.056860 | orchestrator | 2026-01-07 01:06:52.056865 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-07 01:06:52.056871 | orchestrator | Wednesday 07 January 2026 01:04:25 +0000 (0:00:01.066) 0:00:45.861 ***** 2026-01-07 01:06:52.056876 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-07 01:06:52.056882 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-07 01:06:52.056888 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-07 01:06:52.056893 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-07 01:06:52.056899 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-07 01:06:52.056902 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-07 01:06:52.056907 | orchestrator | 2026-01-07 01:06:52.056912 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-07 01:06:52.056918 | orchestrator | Wednesday 07 January 2026 01:04:27 +0000 (0:00:02.157) 0:00:48.018 ***** 2026-01-07 01:06:52.056922 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056932 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 
01:06:52.056940 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056944 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 01:06:52.056948 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056956 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 01:06:52.056960 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056966 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 01:06:52.056969 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056980 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 01:06:52.056987 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-07 01:06:52.056994 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-07 01:06:52.056998 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057002 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057013 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057017 | orchestrator | ok: 
[testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057026 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057034 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057048 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057056 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057062 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-07 01:06:52.057071 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057076 | orchestrator | ok: 
[testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057085 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-07 01:06:52.057089 | orchestrator | 2026-01-07 01:06:52.057094 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-07 01:06:52.057099 | orchestrator | Wednesday 07 January 2026 01:04:34 +0000 (0:00:06.499) 0:00:54.518 ***** 2026-01-07 01:06:52.057105 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057111 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057119 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057124 | orchestrator | 2026-01-07 01:06:52.057129 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-07 01:06:52.057134 | orchestrator | Wednesday 07 January 2026 01:04:36 +0000 (0:00:01.876) 0:00:56.395 ***** 2026-01-07 01:06:52.057140 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-07 01:06:52.057155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-07 01:06:52.057160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-07 01:06:52.057165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 
2026-01-07 01:06:52.057170 | orchestrator | 2026-01-07 01:06:52.057174 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-07 01:06:52.057179 | orchestrator | Wednesday 07 January 2026 01:04:39 +0000 (0:00:03.193) 0:00:59.588 ***** 2026-01-07 01:06:52.057184 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-07 01:06:52.057189 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-07 01:06:52.057193 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-07 01:06:52.057201 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-07 01:06:52.057205 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-07 01:06:52.057210 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-07 01:06:52.057215 | orchestrator | 2026-01-07 01:06:52.057220 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-07 01:06:52.057225 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:00.989) 0:01:00.577 ***** 2026-01-07 01:06:52.057234 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.057239 | orchestrator | 2026-01-07 01:06:52.057244 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-07 01:06:52.057249 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:00.124) 0:01:00.701 ***** 2026-01-07 01:06:52.057253 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.057258 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.057262 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.057267 | orchestrator | 2026-01-07 01:06:52.057272 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:06:52.057277 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:00.275) 0:01:00.976 ***** 2026-01-07 01:06:52.057282 | 
orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:06:52.057287 | orchestrator | 2026-01-07 01:06:52.057292 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-07 01:06:52.057297 | orchestrator | Wednesday 07 January 2026 01:04:41 +0000 (0:00:00.581) 0:01:01.558 ***** 2026-01-07 01:06:52.057304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.057313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.057323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.057334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.057397 | 
orchestrator | 2026-01-07 01:06:52.057403 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-07 01:06:52.057408 | orchestrator | Wednesday 07 January 2026 01:04:45 +0000 (0:00:04.332) 0:01:05.890 ***** 2026-01-07 01:06:52.057431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057486 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057492 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.057497 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.057502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 
01:06:52.057533 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.057538 | orchestrator | 2026-01-07 01:06:52.057543 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-07 01:06:52.057548 | orchestrator | Wednesday 07 January 2026 01:04:46 +0000 (0:00:01.161) 0:01:07.052 ***** 2026-01-07 01:06:52.057557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057579 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.057587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057623 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.057628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.057636 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 2026-01-07 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:52.057967 | orchestrator | '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.057985 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.057991 | orchestrator | 2026-01-07 01:06:52.057996 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-07 01:06:52.058002 | orchestrator | Wednesday 07 January 2026 01:04:48 +0000 (0:00:01.327) 0:01:08.379 ***** 2026-01-07 01:06:52.058008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058119 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058136 | orchestrator | 2026-01-07 01:06:52.058142 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-07 01:06:52.058146 | orchestrator | Wednesday 07 January 2026 01:04:52 +0000 (0:00:04.382) 0:01:12.761 ***** 2026-01-07 01:06:52.058151 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-07 01:06:52.058156 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.058161 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-07 01:06:52.058169 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.058174 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-07 01:06:52.058179 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.058184 | orchestrator | 2026-01-07 01:06:52.058189 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-01-07 01:06:52.058194 | orchestrator | Wednesday 07 January 2026 01:04:53 +0000 (0:00:00.743) 0:01:13.505 ***** 2026-01-07 01:06:52.058199 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:06:52.058204 | orchestrator | 2026-01-07 01:06:52.058209 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-01-07 01:06:52.058216 | orchestrator | Wednesday 07 January 2026 01:04:54 +0000 (0:00:01.162) 0:01:14.667 ***** 2026-01-07 01:06:52.058222 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.058226 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.058231 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:52.058236 | orchestrator | 2026-01-07 01:06:52.058242 | 
orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-07 01:06:52.058247 | orchestrator | Wednesday 07 January 2026 01:04:56 +0000 (0:00:02.113) 0:01:16.780 ***** 2026-01-07 01:06:52.058252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058336 | orchestrator | 2026-01-07 01:06:52.058341 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-07 01:06:52.058347 | orchestrator | Wednesday 07 January 2026 01:05:08 +0000 
(0:00:12.153) 0:01:28.934 ***** 2026-01-07 01:06:52.058352 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.058357 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.058362 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:52.058367 | orchestrator | 2026-01-07 01:06:52.058372 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-07 01:06:52.058377 | orchestrator | Wednesday 07 January 2026 01:05:10 +0000 (0:00:01.600) 0:01:30.535 ***** 2026-01-07 01:06:52.058386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058437 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.058443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058474 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.058484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058517 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.058523 | orchestrator | 2026-01-07 01:06:52.058529 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-07 01:06:52.058535 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:00.577) 0:01:31.112 ***** 2026-01-07 01:06:52.058541 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.058547 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.058553 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.058558 | orchestrator | 2026-01-07 01:06:52.058564 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-01-07 01:06:52.058570 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:00.293) 0:01:31.406 ***** 2026-01-07 01:06:52.058579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:06:52.058607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 
01:06:52.058724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:52.058738 | orchestrator | 2026-01-07 01:06:52.058744 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-01-07 01:06:52.058750 | orchestrator | Wednesday 07 January 2026 01:05:14 +0000 (0:00:03.032) 0:01:34.438 ***** 2026-01-07 01:06:52.058756 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:06:52.058762 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:52.058769 | orchestrator | } 2026-01-07 01:06:52.058775 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:06:52.058781 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:52.058787 | orchestrator | } 2026-01-07 01:06:52.058793 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:06:52.058798 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:06:52.058802 | orchestrator | } 2026-01-07 01:06:52.058807 | orchestrator | 2026-01-07 01:06:52.058812 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:06:52.058817 | orchestrator | Wednesday 07 January 2026 01:05:14 +0000 (0:00:00.558) 0:01:34.996 ***** 2026-01-07 01:06:52.058822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058853 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.058861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058892 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.058898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:06:52.058906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:06:52.058926 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.058931 | orchestrator | 2026-01-07 01:06:52.058939 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:06:52.058944 | orchestrator | Wednesday 07 January 2026 01:05:15 +0000 (0:00:00.836) 0:01:35.833 ***** 2026-01-07 01:06:52.058950 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 01:06:52.058955 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:52.058960 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:52.058965 | orchestrator | 2026-01-07 01:06:52.058970 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-07 01:06:52.058976 | orchestrator | Wednesday 07 January 2026 01:05:16 +0000 (0:00:00.308) 0:01:36.142 ***** 2026-01-07 01:06:52.058981 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.058986 | orchestrator | 2026-01-07 01:06:52.058991 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-07 01:06:52.058997 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:02.507) 0:01:38.649 ***** 2026-01-07 01:06:52.059002 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059007 | orchestrator | 2026-01-07 01:06:52.059012 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-07 01:06:52.059017 | orchestrator | Wednesday 07 January 2026 01:05:21 +0000 (0:00:03.263) 0:01:41.913 ***** 2026-01-07 01:06:52.059023 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059028 | orchestrator | 2026-01-07 01:06:52.059033 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:06:52.059039 | orchestrator | Wednesday 07 January 2026 01:05:39 +0000 (0:00:17.437) 0:01:59.351 ***** 2026-01-07 01:06:52.059044 | orchestrator | 2026-01-07 01:06:52.059049 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:06:52.059054 | orchestrator | Wednesday 07 January 2026 01:05:39 +0000 (0:00:00.066) 0:01:59.417 ***** 2026-01-07 01:06:52.059059 | orchestrator | 2026-01-07 01:06:52.059064 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:06:52.059069 | 
orchestrator | Wednesday 07 January 2026 01:05:39 +0000 (0:00:00.064) 0:01:59.482 ***** 2026-01-07 01:06:52.059074 | orchestrator | 2026-01-07 01:06:52.059079 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-07 01:06:52.059085 | orchestrator | Wednesday 07 January 2026 01:05:39 +0000 (0:00:00.068) 0:01:59.550 ***** 2026-01-07 01:06:52.059090 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059096 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.059101 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:52.059106 | orchestrator | 2026-01-07 01:06:52.059112 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-07 01:06:52.059117 | orchestrator | Wednesday 07 January 2026 01:06:05 +0000 (0:00:26.344) 0:02:25.894 ***** 2026-01-07 01:06:52.059122 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059128 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.059137 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:52.059142 | orchestrator | 2026-01-07 01:06:52.059148 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-07 01:06:52.059153 | orchestrator | Wednesday 07 January 2026 01:06:15 +0000 (0:00:09.391) 0:02:35.285 ***** 2026-01-07 01:06:52.059158 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059163 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.059169 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:52.059174 | orchestrator | 2026-01-07 01:06:52.059179 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-07 01:06:52.059184 | orchestrator | Wednesday 07 January 2026 01:06:38 +0000 (0:00:22.926) 0:02:58.212 ***** 2026-01-07 01:06:52.059190 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:52.059198 | orchestrator | changed: 
[testbed-node-1] 2026-01-07 01:06:52.059203 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:52.059208 | orchestrator | 2026-01-07 01:06:52.059213 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-07 01:06:52.059218 | orchestrator | Wednesday 07 January 2026 01:06:49 +0000 (0:00:10.932) 0:03:09.144 ***** 2026-01-07 01:06:52.059224 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:52.059229 | orchestrator | 2026-01-07 01:06:52.059235 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:06:52.059241 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:06:52.059247 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 01:06:52.059253 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 01:06:52.059258 | orchestrator | 2026-01-07 01:06:52.059263 | orchestrator | 2026-01-07 01:06:52.059268 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:06:52.059273 | orchestrator | Wednesday 07 January 2026 01:06:49 +0000 (0:00:00.237) 0:03:09.381 ***** 2026-01-07 01:06:52.059279 | orchestrator | =============================================================================== 2026-01-07 01:06:52.059284 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.34s 2026-01-07 01:06:52.059289 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.93s 2026-01-07 01:06:52.059294 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.44s 2026-01-07 01:06:52.059300 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 14.12s 2026-01-07 01:06:52.059305 | 
orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.15s 2026-01-07 01:06:52.059310 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.93s 2026-01-07 01:06:52.059315 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.39s 2026-01-07 01:06:52.059321 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 7.70s 2026-01-07 01:06:52.059329 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 7.25s 2026-01-07 01:06:52.059335 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.50s 2026-01-07 01:06:52.059340 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.38s 2026-01-07 01:06:52.059345 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.33s 2026-01-07 01:06:52.059351 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.07s 2026-01-07 01:06:52.059356 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.06s 2026-01-07 01:06:52.059362 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.33s 2026-01-07 01:06:52.059367 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 3.26s 2026-01-07 01:06:52.059376 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.19s 2026-01-07 01:06:52.059381 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.03s 2026-01-07 01:06:52.059386 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.51s 2026-01-07 01:06:52.059391 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.45s 2026-01-07 01:06:55.096687 | orchestrator | 
2026-01-07 01:06:55 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:55.097741 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:55.098864 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:06:55.099000 | orchestrator | 2026-01-07 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:06:58.139859 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:06:58.141726 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:06:58.143335 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:06:58.143487 | orchestrator | 2026-01-07 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:01.189227 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:01.190800 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:01.192447 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:01.192521 | orchestrator | 2026-01-07 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:04.235238 | orchestrator | 2026-01-07 01:07:04 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:04.236924 | orchestrator | 2026-01-07 01:07:04 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:04.239154 | orchestrator | 2026-01-07 01:07:04 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:04.239196 | orchestrator | 2026-01-07 01:07:04 | INFO  | 
Wait 1 second(s) until the next check 2026-01-07 01:07:07.284355 | orchestrator | 2026-01-07 01:07:07 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:07.285481 | orchestrator | 2026-01-07 01:07:07 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:07.286957 | orchestrator | 2026-01-07 01:07:07 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:07.287064 | orchestrator | 2026-01-07 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:10.335441 | orchestrator | 2026-01-07 01:07:10 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:10.338965 | orchestrator | 2026-01-07 01:07:10 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:10.340788 | orchestrator | 2026-01-07 01:07:10 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:10.340836 | orchestrator | 2026-01-07 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:13.385230 | orchestrator | 2026-01-07 01:07:13 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:13.386995 | orchestrator | 2026-01-07 01:07:13 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:13.388388 | orchestrator | 2026-01-07 01:07:13 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:13.388429 | orchestrator | 2026-01-07 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:16.427732 | orchestrator | 2026-01-07 01:07:16 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:16.427917 | orchestrator | 2026-01-07 01:07:16 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:16.428984 | orchestrator | 2026-01-07 01:07:16 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state 
STARTED 2026-01-07 01:07:16.429013 | orchestrator | 2026-01-07 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:19.472634 | orchestrator | 2026-01-07 01:07:19 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:19.474128 | orchestrator | 2026-01-07 01:07:19 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:19.475748 | orchestrator | 2026-01-07 01:07:19 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:19.475783 | orchestrator | 2026-01-07 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:22.518951 | orchestrator | 2026-01-07 01:07:22 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:22.519818 | orchestrator | 2026-01-07 01:07:22 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:22.520733 | orchestrator | 2026-01-07 01:07:22 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:22.520764 | orchestrator | 2026-01-07 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:25.565480 | orchestrator | 2026-01-07 01:07:25 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:25.567109 | orchestrator | 2026-01-07 01:07:25 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:25.568996 | orchestrator | 2026-01-07 01:07:25 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:25.569041 | orchestrator | 2026-01-07 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:28.608921 | orchestrator | 2026-01-07 01:07:28 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:28.610285 | orchestrator | 2026-01-07 01:07:28 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:28.612542 | orchestrator | 
2026-01-07 01:07:28 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:28.612591 | orchestrator | 2026-01-07 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:31.661346 | orchestrator | 2026-01-07 01:07:31 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:31.662963 | orchestrator | 2026-01-07 01:07:31 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:31.664849 | orchestrator | 2026-01-07 01:07:31 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:31.664899 | orchestrator | 2026-01-07 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:34.708890 | orchestrator | 2026-01-07 01:07:34 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:34.710205 | orchestrator | 2026-01-07 01:07:34 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:34.712134 | orchestrator | 2026-01-07 01:07:34 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:34.712165 | orchestrator | 2026-01-07 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:37.754951 | orchestrator | 2026-01-07 01:07:37 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:37.756970 | orchestrator | 2026-01-07 01:07:37 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:37.758942 | orchestrator | 2026-01-07 01:07:37 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:37.758982 | orchestrator | 2026-01-07 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:40.805391 | orchestrator | 2026-01-07 01:07:40 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:40.807137 | orchestrator | 2026-01-07 01:07:40 | INFO  | Task 
7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:40.809178 | orchestrator | 2026-01-07 01:07:40 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:40.809218 | orchestrator | 2026-01-07 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:43.848997 | orchestrator | 2026-01-07 01:07:43 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:43.850959 | orchestrator | 2026-01-07 01:07:43 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:43.851000 | orchestrator | 2026-01-07 01:07:43 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:43.851005 | orchestrator | 2026-01-07 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:46.897009 | orchestrator | 2026-01-07 01:07:46 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:46.898466 | orchestrator | 2026-01-07 01:07:46 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:46.901446 | orchestrator | 2026-01-07 01:07:46 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:46.901499 | orchestrator | 2026-01-07 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:49.942611 | orchestrator | 2026-01-07 01:07:49 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:07:49.944617 | orchestrator | 2026-01-07 01:07:49 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:49.946534 | orchestrator | 2026-01-07 01:07:49 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state STARTED 2026-01-07 01:07:49.946584 | orchestrator | 2026-01-07 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:52.991956 | orchestrator | 2026-01-07 01:07:52 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state 
STARTED 2026-01-07 01:07:52.994371 | orchestrator | 2026-01-07 01:07:52 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED 2026-01-07 01:07:53.002236 | orchestrator | 2026-01-07 01:07:53.002310 | orchestrator | 2026-01-07 01:07:53.002319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:07:53.002326 | orchestrator | 2026-01-07 01:07:53.002332 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:07:53.002338 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:00.191) 0:00:00.191 ***** 2026-01-07 01:07:53.002343 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:53.002350 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:07:53.002355 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:07:53.002360 | orchestrator | 2026-01-07 01:07:53.002387 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:07:53.002394 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:00.265) 0:00:00.457 ***** 2026-01-07 01:07:53.002399 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-07 01:07:53.002405 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-07 01:07:53.002410 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-07 01:07:53.002415 | orchestrator | 2026-01-07 01:07:53.002432 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-07 01:07:53.002438 | orchestrator | 2026-01-07 01:07:53.002443 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-07 01:07:53.002448 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:00.362) 0:00:00.819 ***** 2026-01-07 01:07:53.002455 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-07 01:07:53.002461 | orchestrator | 2026-01-07 01:07:53.002466 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-07 01:07:53.002471 | orchestrator | Wednesday 07 January 2026 01:06:21 +0000 (0:00:00.476) 0:00:01.295 ***** 2026-01-07 01:07:53.002479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002497 | orchestrator | 2026-01-07 01:07:53.002502 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-07 01:07:53.002508 | orchestrator | Wednesday 07 January 2026 01:06:21 +0000 (0:00:00.617) 0:00:01.912 ***** 2026-01-07 01:07:53.002513 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:07:53.002519 | orchestrator | 2026-01-07 01:07:53.002523 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-07 01:07:53.002531 | orchestrator | Wednesday 07 January 2026 01:06:22 +0000 (0:00:00.718) 0:00:02.631 ***** 2026-01-07 01:07:53.002534 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:07:53.002537 | orchestrator | 2026-01-07 01:07:53.002541 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-07 01:07:53.002555 | orchestrator | Wednesday 07 January 2026 01:06:23 +0000 (0:00:00.593) 0:00:03.225 ***** 2026-01-07 01:07:53.002561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002571 | orchestrator | 2026-01-07 01:07:53.002574 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-07 01:07:53.002577 | orchestrator | Wednesday 07 January 2026 01:06:24 +0000 (0:00:01.042) 0:00:04.268 ***** 2026-01-07 01:07:53.002581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002584 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.002588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002594 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.002600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002606 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.002615 | orchestrator | 2026-01-07 01:07:53.002619 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-07 01:07:53.002622 | orchestrator | Wednesday 07 January 2026 01:06:24 +0000 (0:00:00.380) 0:00:04.648 ***** 2026-01-07 01:07:53.002625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002628 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.002631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002635 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.002638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.002644 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.002647 | orchestrator | 2026-01-07 01:07:53.002650 | orchestrator | TASK [grafana : Copying over config.json 
files] ******************************** 2026-01-07 01:07:53.002653 | orchestrator | Wednesday 07 January 2026 01:06:25 +0000 (0:00:00.731) 0:00:05.380 ***** 2026-01-07 01:07:53.002660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002672 | orchestrator | 2026-01-07 01:07:53.002675 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-07 01:07:53.002678 | orchestrator | Wednesday 07 January 2026 01:06:26 +0000 (0:00:01.065) 0:00:06.445 ***** 2026-01-07 01:07:53.002681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002694 | orchestrator | 2026-01-07 01:07:53.002697 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-07 01:07:53.002702 | orchestrator | Wednesday 07 January 2026 01:06:27 +0000 (0:00:01.109) 0:00:07.554 ***** 2026-01-07 01:07:53.002705 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.002708 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.002712 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.002715 | orchestrator | 2026-01-07 01:07:53.002718 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-07 01:07:53.002721 | orchestrator | Wednesday 07 January 2026 01:06:27 +0000 (0:00:00.479) 0:00:08.034 ***** 
2026-01-07 01:07:53.002724 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:07:53.002727 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:07:53.002730 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:07:53.002734 | orchestrator | 2026-01-07 01:07:53.002739 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-07 01:07:53.002742 | orchestrator | Wednesday 07 January 2026 01:06:29 +0000 (0:00:01.184) 0:00:09.218 ***** 2026-01-07 01:07:53.002745 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-07 01:07:53.002749 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-07 01:07:53.002752 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-07 01:07:53.002755 | orchestrator | 2026-01-07 01:07:53.002758 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-01-07 01:07:53.002761 | orchestrator | Wednesday 07 January 2026 01:06:30 +0000 (0:00:01.123) 0:00:10.342 ***** 2026-01-07 01:07:53.002764 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:07:53.002767 | orchestrator | 2026-01-07 01:07:53.002771 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-01-07 01:07:53.002774 | orchestrator | Wednesday 07 January 2026 01:06:30 +0000 (0:00:00.706) 0:00:11.049 ***** 2026-01-07 01:07:53.002777 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:53.002780 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:07:53.002784 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 01:07:53.002787 | orchestrator | 2026-01-07 01:07:53.002791 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-07 01:07:53.002795 | orchestrator | Wednesday 07 January 2026 01:06:31 +0000 (0:00:00.678) 0:00:11.727 ***** 2026-01-07 01:07:53.002873 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:53.002878 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:53.002882 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:53.002885 | orchestrator | 2026-01-07 01:07:53.002889 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-01-07 01:07:53.002893 | orchestrator | Wednesday 07 January 2026 01:06:32 +0000 (0:00:01.336) 0:00:13.064 ***** 2026-01-07 01:07:53.002897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:07:53.002914 | orchestrator | 2026-01-07 01:07:53.002918 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-01-07 01:07:53.002922 | orchestrator | Wednesday 07 January 2026 01:06:34 +0000 (0:00:01.165) 0:00:14.230 ***** 2026-01-07 01:07:53.002927 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:07:53.002932 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:07:53.002937 | orchestrator | } 2026-01-07 01:07:53.002942 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:07:53.002947 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:07:53.002952 | orchestrator | } 2026-01-07 01:07:53.002958 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:07:53.002964 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 
01:07:53.002976 | orchestrator | } 2026-01-07 01:07:53.002982 | orchestrator | 2026-01-07 01:07:53.002986 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:07:53.002990 | orchestrator | Wednesday 07 January 2026 01:06:34 +0000 (0:00:00.326) 0:00:14.557 ***** 2026-01-07 01:07:53.002994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.003001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.003005 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.003009 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.003013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:07:53.003018 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.003023 | orchestrator | 2026-01-07 01:07:53.003030 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-07 01:07:53.003038 | orchestrator | Wednesday 07 January 2026 01:06:35 +0000 (0:00:00.708) 0:00:15.266 ***** 2026-01-07 01:07:53.003043 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:53.003048 | orchestrator | 2026-01-07 01:07:53.003052 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-07 01:07:53.003057 | orchestrator | Wednesday 07 January 2026 01:06:37 +0000 (0:00:02.215) 0:00:17.482 ***** 2026-01-07 01:07:53.003062 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:53.003067 | orchestrator | 2026-01-07 01:07:53.003072 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-07 01:07:53.003077 | orchestrator | Wednesday 07 January 2026 01:06:39 +0000 (0:00:02.184) 0:00:19.666 ***** 2026-01-07 01:07:53.003082 | orchestrator | 2026-01-07 01:07:53.003086 | orchestrator | TASK [grafana 
: Flush handlers] ************************************************ 2026-01-07 01:07:53.003091 | orchestrator | Wednesday 07 January 2026 01:06:39 +0000 (0:00:00.076) 0:00:19.742 ***** 2026-01-07 01:07:53.003095 | orchestrator | 2026-01-07 01:07:53.003100 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-07 01:07:53.003108 | orchestrator | Wednesday 07 January 2026 01:06:39 +0000 (0:00:00.065) 0:00:19.807 ***** 2026-01-07 01:07:53.003113 | orchestrator | 2026-01-07 01:07:53.003118 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-07 01:07:53.003123 | orchestrator | Wednesday 07 January 2026 01:06:39 +0000 (0:00:00.070) 0:00:19.878 ***** 2026-01-07 01:07:53.003128 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.003134 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.003144 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:07:53.003149 | orchestrator | 2026-01-07 01:07:53.003154 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-07 01:07:53.003160 | orchestrator | Wednesday 07 January 2026 01:06:42 +0000 (0:00:02.420) 0:00:22.299 ***** 2026-01-07 01:07:53.003163 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.003167 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.003171 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-07 01:07:53.003177 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-07 01:07:53.003180 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-01-07 01:07:53.003184 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:53.003187 | orchestrator | 2026-01-07 01:07:53.003190 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-07 01:07:53.003193 | orchestrator | Wednesday 07 January 2026 01:07:20 +0000 (0:00:38.335) 0:01:00.635 ***** 2026-01-07 01:07:53.003196 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.003199 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:07:53.003202 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:07:53.003206 | orchestrator | 2026-01-07 01:07:53.003209 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-07 01:07:53.003212 | orchestrator | Wednesday 07 January 2026 01:07:46 +0000 (0:00:26.311) 0:01:26.946 ***** 2026-01-07 01:07:53.003215 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:07:53.003218 | orchestrator | 2026-01-07 01:07:53.003221 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-07 01:07:53.003224 | orchestrator | Wednesday 07 January 2026 01:07:48 +0000 (0:00:02.006) 0:01:28.953 ***** 2026-01-07 01:07:53.003227 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.003230 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:07:53.003233 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:07:53.003237 | orchestrator | 2026-01-07 01:07:53.003240 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-07 01:07:53.003243 | orchestrator | Wednesday 07 January 2026 01:07:49 +0000 (0:00:00.302) 0:01:29.255 ***** 2026-01-07 01:07:53.003247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-01-07 01:07:53.003253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-07 01:07:53.003259 | orchestrator | 2026-01-07 01:07:53.003264 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-07 01:07:53.003272 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:02.425) 0:01:31.680 ***** 2026-01-07 01:07:53.003278 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:07:53.003283 | orchestrator | 2026-01-07 01:07:53.003287 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:07:53.003293 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:07:53.003301 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:07:53.003306 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:07:53.003316 | orchestrator | 2026-01-07 01:07:53.003322 | orchestrator | 2026-01-07 01:07:53.003325 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:07:53.003329 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:00.244) 0:01:31.925 ***** 2026-01-07 01:07:53.003332 | orchestrator | =============================================================================== 2026-01-07 01:07:53.003335 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.34s 2026-01-07 01:07:53.003339 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 26.31s 2026-01-07 01:07:53.003343 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.43s 2026-01-07 01:07:53.003348 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.42s 2026-01-07 01:07:53.003352 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.22s 2026-01-07 01:07:53.003357 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.18s 2026-01-07 01:07:53.003361 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.01s 2026-01-07 01:07:53.003370 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.34s 2026-01-07 01:07:53.003376 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.18s 2026-01-07 01:07:53.003381 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.17s 2026-01-07 01:07:53.003386 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.12s 2026-01-07 01:07:53.003391 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.11s 2026-01-07 01:07:53.003397 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.07s 2026-01-07 01:07:53.003400 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.04s 2026-01-07 01:07:53.003403 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.73s 2026-01-07 01:07:53.003406 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.72s 2026-01-07 01:07:53.003412 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.71s 2026-01-07 01:07:53.003417 | orchestrator | grafana : Check if the folder for custom 
grafana dashboards exists ------ 0.71s
2026-01-07 01:07:53.003422 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.68s
2026-01-07 01:07:53.003427 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.62s
2026-01-07 01:07:53.003434 | orchestrator | 2026-01-07 01:07:53 | INFO  | Task 610b01c6-e476-4df2-9e2f-24c924b1e414 is in state SUCCESS
2026-01-07 01:07:53.004070 | orchestrator | 2026-01-07 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:07:56.038360 | orchestrator | 2026-01-07 01:07:56 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:07:56.040973 | orchestrator | 2026-01-07 01:07:56 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state STARTED
2026-01-07 01:07:56.041028 | orchestrator | 2026-01-07 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:44.809641 | orchestrator | 2026-01-07 01:08:44 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:08:44.811343 | orchestrator | 2026-01-07 01:08:44 | INFO  | Task 7a7b746d-b865-44db-8460-083e9cd9b488 is in state SUCCESS
2026-01-07 01:08:44.811627 | orchestrator | 2026-01-07 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:08:47.858833 | orchestrator | 2026-01-07 01:08:47 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:08:47.859503 | orchestrator | 2026-01-07 01:08:47 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED
2026-01-07 01:08:47.859597 | orchestrator | 2026-01-07 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:06.797473 | orchestrator | 2026-01-07 01:13:06 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:06.799217 | orchestrator | 2026-01-07 01:13:06 | INFO  
| Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 01:13:06.799304 | orchestrator | 2026-01-07 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:09.837455 | orchestrator | 2026-01-07 01:13:09 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:13:09.837694 | orchestrator | 2026-01-07 01:13:09 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 01:13:09.837938 | orchestrator | 2026-01-07 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:12.867631 | orchestrator | 2026-01-07 01:13:12 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:13:12.867690 | orchestrator | 2026-01-07 01:13:12 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 01:13:12.867700 | orchestrator | 2026-01-07 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:15.897016 | orchestrator | 2026-01-07 01:13:15 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:13:15.897343 | orchestrator | 2026-01-07 01:13:15 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 01:13:15.897373 | orchestrator | 2026-01-07 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:18.928086 | orchestrator | 2026-01-07 01:13:18 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:13:18.928399 | orchestrator | 2026-01-07 01:13:18 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 01:13:18.928426 | orchestrator | 2026-01-07 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:21.970068 | orchestrator | 2026-01-07 01:13:21 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED 2026-01-07 01:13:21.971334 | orchestrator | 2026-01-07 01:13:21 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED 2026-01-07 
01:13:21.971374 | orchestrator | 2026-01-07 01:13:21 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:25.017362 | orchestrator | 2026-01-07 01:13:25 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:25.019110 | orchestrator | 2026-01-07 01:13:25 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state STARTED
2026-01-07 01:13:25.019277 | orchestrator | 2026-01-07 01:13:25 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:28.062141 | orchestrator | 2026-01-07 01:13:28 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:28.066830 | orchestrator | 2026-01-07 01:13:28 | INFO  | Task 2a01fcce-f58d-472a-8b0b-e8ddbbd32a39 is in state SUCCESS
2026-01-07 01:13:28.068316 | orchestrator |
2026-01-07 01:13:28.068363 | orchestrator |
2026-01-07 01:13:28.068372 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:13:28.068410 | orchestrator |
2026-01-07 01:13:28.068418 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:13:28.068425 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:00.152) 0:00:00.152 *****
2026-01-07 01:13:28.068433 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.068441 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.068448 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.068455 | orchestrator |
2026-01-07 01:13:28.068462 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:13:28.068470 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:00.328) 0:00:00.480 *****
2026-01-07 01:13:28.068477 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-07 01:13:28.068496 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-07 01:13:28.068504 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-07 01:13:28.068536 | orchestrator |
2026-01-07 01:13:28.068543 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-07 01:13:28.068549 | orchestrator |
2026-01-07 01:13:28.068555 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-07 01:13:28.068562 | orchestrator | Wednesday 07 January 2026 01:05:12 +0000 (0:00:00.539) 0:00:01.020 *****
2026-01-07 01:13:28.068568 | orchestrator |
2026-01-07 01:13:28.068574 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-01-07 01:13:28.068581 | orchestrator |
2026-01-07 01:13:28.068587 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-01-07 01:13:28.068593 | orchestrator |
2026-01-07 01:13:28.068599 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-01-07 01:13:28.068606 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.068612 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.068618 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.068624 | orchestrator |
2026-01-07 01:13:28.068631 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:13:28.068638 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:13:28.068645 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:13:28.068652 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:13:28.068659 | orchestrator |
2026-01-07 01:13:28.068665 | orchestrator |
2026-01-07 01:13:28.068671 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:13:28.068678 | orchestrator | Wednesday 07 January 2026 01:08:43 +0000 (0:03:31.726) 0:03:32.747 *****
2026-01-07 01:13:28.068685 | orchestrator | ===============================================================================
2026-01-07 01:13:28.068692 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 211.73s
2026-01-07 01:13:28.068699 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s
2026-01-07 01:13:28.068799 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-01-07 01:13:28.068807 | orchestrator |
2026-01-07 01:13:28.068814 | orchestrator |
2026-01-07 01:13:28.068821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:13:28.068828 | orchestrator |
2026-01-07 01:13:28.068835 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:13:28.068842 | orchestrator | Wednesday 07 January 2026 01:08:48 +0000 (0:00:00.195) 0:00:00.195 *****
2026-01-07 01:13:28.068848 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.068872 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.068880 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.068888 | orchestrator |
2026-01-07 01:13:28.068896 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:13:28.068903 | orchestrator | Wednesday 07 January 2026 01:08:48 +0000 (0:00:00.256) 0:00:00.451 *****
2026-01-07 01:13:28.068910 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-07 01:13:28.068937 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-07 01:13:28.068945 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-07 01:13:28.068951 | orchestrator |
2026-01-07 01:13:28.068958 | orchestrator | PLAY [Apply role octavia]
******************************************************
2026-01-07 01:13:28.068965 | orchestrator |
2026-01-07 01:13:28.068973 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:13:28.068981 | orchestrator | Wednesday 07 January 2026 01:08:49 +0000 (0:00:00.343) 0:00:00.795 *****
2026-01-07 01:13:28.068988 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:28.068996 | orchestrator |
2026-01-07 01:13:28.069003 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] **************
2026-01-07 01:13:28.069011 | orchestrator | Wednesday 07 January 2026 01:08:49 +0000 (0:00:00.393) 0:00:01.188 *****
2026-01-07 01:13:28.069020 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-07 01:13:28.069028 | orchestrator |
2026-01-07 01:13:28.069044 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] *************
2026-01-07 01:13:28.069053 | orchestrator | Wednesday 07 January 2026 01:08:53 +0000 (0:00:03.669) 0:00:04.857 *****
2026-01-07 01:13:28.069061 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-07 01:13:28.069070 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-07 01:13:28.069078 | orchestrator |
2026-01-07 01:13:28.069087 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-07 01:13:28.069095 | orchestrator | Wednesday 07 January 2026 01:08:59 +0000 (0:00:05.888) 0:00:10.746 *****
2026-01-07 01:13:28.069115 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:13:28.069123 | orchestrator |
2026-01-07 01:13:28.069130 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-07 01:13:28.069137 | orchestrator | Wednesday 07 January 2026 01:09:02 +0000 (0:00:03.163) 0:00:13.909 *****
2026-01-07 01:13:28.069144 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:13:28.069150 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:13:28.069157 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:13:28.069164 | orchestrator |
2026-01-07 01:13:28.069171 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-07 01:13:28.069177 | orchestrator | Wednesday 07 January 2026 01:09:09 +0000 (0:00:07.312) 0:00:21.221 *****
2026-01-07 01:13:28.069184 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:13:28.069190 | orchestrator |
2026-01-07 01:13:28.069197 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************
2026-01-07 01:13:28.069204 | orchestrator | Wednesday 07 January 2026 01:09:13 +0000 (0:00:04.230) 0:00:25.452 *****
2026-01-07 01:13:28.069211 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:13:28.069217 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:13:28.069254 | orchestrator |
2026-01-07 01:13:28.069262 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-07 01:13:28.069268 | orchestrator | Wednesday 07 January 2026 01:09:20 +0000 (0:00:07.198) 0:00:32.650 *****
2026-01-07 01:13:28.069274 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-07 01:13:28.069281 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-07 01:13:28.069296 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-07 01:13:28.069303 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-07 01:13:28.069309 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-07 01:13:28.069316 | orchestrator |
2026-01-07 01:13:28.069323 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:13:28.069329 | orchestrator | Wednesday 07 January 2026 01:09:35 +0000 (0:00:14.711) 0:00:47.362 *****
2026-01-07 01:13:28.069336 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:28.069357 | orchestrator |
2026-01-07 01:13:28.069363 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-07 01:13:28.069370 | orchestrator | Wednesday 07 January 2026 01:09:36 +0000 (0:00:00.598) 0:00:47.960 *****
2026-01-07 01:13:28.069377 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069391 | orchestrator |
2026-01-07 01:13:28.069398 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-07 01:13:28.069405 | orchestrator | Wednesday 07 January 2026 01:09:41 +0000 (0:00:05.412) 0:00:53.373 *****
2026-01-07 01:13:28.069411 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069452 | orchestrator |
2026-01-07 01:13:28.069460 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-07 01:13:28.069467 | orchestrator | Wednesday 07 January 2026 01:09:45 +0000 (0:00:03.973) 0:00:57.347 *****
2026-01-07 01:13:28.069473 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.069479 | orchestrator |
2026-01-07 01:13:28.069485 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-07 01:13:28.069492 | orchestrator | Wednesday 07 January 2026 01:09:48 +0000 (0:00:02.865) 0:01:00.213 *****
2026-01-07 01:13:28.069498 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-07 01:13:28.069505 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-07 01:13:28.069512 | orchestrator |
2026-01-07 01:13:28.069519 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-07 01:13:28.069526 | orchestrator | Wednesday 07 January 2026 01:09:57 +0000 (0:00:09.249) 0:01:09.462 *****
2026-01-07 01:13:28.069532 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-07 01:13:28.069539 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-07 01:13:28.069547 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-07 01:13:28.069555 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-07 01:13:28.069562 | orchestrator |
2026-01-07 01:13:28.069569 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-07 01:13:28.069576 | orchestrator | Wednesday 07 January 2026 01:10:13 +0000 (0:00:15.870) 0:01:25.332 *****
2026-01-07 01:13:28.069582 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069589 | orchestrator |
2026-01-07 01:13:28.069600 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-07 01:13:28.069607 | orchestrator | Wednesday 07 January 2026 01:10:18 +0000 (0:00:05.019) 0:01:30.352 *****
2026-01-07 01:13:28.069614 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069621 | orchestrator |
2026-01-07 01:13:28.069628 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-07 01:13:28.069635 | orchestrator | Wednesday 07
January 2026 01:10:23 +0000 (0:00:04.606) 0:01:34.959 *****
2026-01-07 01:13:28.069642 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:28.069648 | orchestrator |
2026-01-07 01:13:28.069655 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-07 01:13:28.069676 | orchestrator | Wednesday 07 January 2026 01:10:23 +0000 (0:00:00.225) 0:01:35.184 *****
2026-01-07 01:13:28.069684 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.069691 | orchestrator |
2026-01-07 01:13:28.069698 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:13:28.069704 | orchestrator | Wednesday 07 January 2026 01:10:29 +0000 (0:00:05.767) 0:01:40.952 *****
2026-01-07 01:13:28.069711 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-01-07 01:13:28.069719 | orchestrator |
2026-01-07 01:13:28.069725 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-07 01:13:28.069732 | orchestrator | Wednesday 07 January 2026 01:10:30 +0000 (0:00:01.035) 0:01:41.987 *****
2026-01-07 01:13:28.069739 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069746 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069753 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069760 | orchestrator |
2026-01-07 01:13:28.069766 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-07 01:13:28.069772 | orchestrator | Wednesday 07 January 2026 01:10:35 +0000 (0:00:05.007) 0:01:46.994 *****
2026-01-07 01:13:28.069778 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069784 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069790 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069795 | orchestrator |
2026-01-07 01:13:28.069801 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-07 01:13:28.069806 | orchestrator | Wednesday 07 January 2026 01:10:39 +0000 (0:00:04.525) 0:01:51.519 *****
2026-01-07 01:13:28.069812 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069818 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069823 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069829 | orchestrator |
2026-01-07 01:13:28.069835 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-07 01:13:28.069840 | orchestrator | Wednesday 07 January 2026 01:10:40 +0000 (0:00:00.994) 0:01:52.514 *****
2026-01-07 01:13:28.069846 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.069852 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.069859 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.069864 | orchestrator |
2026-01-07 01:13:28.069870 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-07 01:13:28.069876 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:03.095) 0:01:55.610 *****
2026-01-07 01:13:28.069882 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069889 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069895 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069901 | orchestrator |
2026-01-07 01:13:28.069908 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-07 01:13:28.069914 | orchestrator | Wednesday 07 January 2026 01:10:45 +0000 (0:00:01.359) 0:01:56.970 *****
2026-01-07 01:13:28.069921 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069927 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069934 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069941 | orchestrator |
2026-01-07 01:13:28.069948 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-07 01:13:28.069954 | orchestrator | Wednesday 07 January 2026 01:10:46 +0000 (0:00:02.095) 0:01:58.125 *****
2026-01-07 01:13:28.069960 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.069967 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.069973 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.069980 | orchestrator |
2026-01-07 01:13:28.069986 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-07 01:13:28.069992 | orchestrator | Wednesday 07 January 2026 01:10:48 +0000 (0:00:02.095) 0:02:00.220 *****
2026-01-07 01:13:28.069999 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.070005 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.070054 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.070065 | orchestrator |
2026-01-07 01:13:28.070072 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-07 01:13:28.070078 | orchestrator | Wednesday 07 January 2026 01:10:50 +0000 (0:00:01.777) 0:02:01.998 *****
2026-01-07 01:13:28.070085 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070092 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.070099 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.070106 | orchestrator |
2026-01-07 01:13:28.070113 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-07 01:13:28.070120 | orchestrator | Wednesday 07 January 2026 01:10:50 +0000 (0:00:00.615) 0:02:02.614 *****
2026-01-07 01:13:28.070128 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.070136 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070143 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.070150 | orchestrator |
2026-01-07 01:13:28.070157 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:13:28.070165 | orchestrator | Wednesday 07 January 2026 01:10:53 +0000 (0:00:02.769) 0:02:05.383 *****
2026-01-07 01:13:28.070172 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:28.070180 | orchestrator |
2026-01-07 01:13:28.070187 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-07 01:13:28.070194 | orchestrator | Wednesday 07 January 2026 01:10:54 +0000 (0:00:01.019) 0:02:06.402 *****
2026-01-07 01:13:28.070201 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070208 | orchestrator |
2026-01-07 01:13:28.070215 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-07 01:13:28.070239 | orchestrator | Wednesday 07 January 2026 01:10:58 +0000 (0:00:03.886) 0:02:10.289 *****
2026-01-07 01:13:28.070246 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070252 | orchestrator |
2026-01-07 01:13:28.070258 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-07 01:13:28.070264 | orchestrator | Wednesday 07 January 2026 01:11:02 +0000 (0:00:04.085) 0:02:14.374 *****
2026-01-07 01:13:28.070270 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-07 01:13:28.070276 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-07 01:13:28.070283 | orchestrator |
2026-01-07 01:13:28.070291 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-07 01:13:28.070307 | orchestrator | Wednesday 07 January 2026 01:11:10 +0000 (0:00:07.392) 0:02:21.767 *****
2026-01-07 01:13:28.070315 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070322 | orchestrator |
2026-01-07 01:13:28.070329 | orchestrator | TASK [octavia : Set octavia resources facts]
***********************************
2026-01-07 01:13:28.070337 | orchestrator | Wednesday 07 January 2026 01:11:13 +0000 (0:00:03.574) 0:02:25.341 *****
2026-01-07 01:13:28.070344 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:28.070351 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:13:28.070357 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:13:28.070364 | orchestrator |
2026-01-07 01:13:28.070372 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-07 01:13:28.070379 | orchestrator | Wednesday 07 January 2026 01:11:13 +0000 (0:00:00.337) 0:02:25.678 *****
2026-01-07 01:13:28.070389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:13:28.070405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:13:28.070413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-07 01:13:28.070424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:13:28.070437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:13:28.070445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-07 01:13:28.070453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-07 01:13:28.070518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:13:28.070530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:13:28.070538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:13:28.070546 | orchestrator |
2026-01-07 01:13:28.070553 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-07 01:13:28.070560 | orchestrator | Wednesday 07 January 2026 01:11:16 +0000 (0:00:02.266) 0:02:27.945 *****
2026-01-07 01:13:28.070568 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:28.070575 | orchestrator |
2026-01-07 01:13:28.070582 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-07 01:13:28.070590 | orchestrator | Wednesday 07 January 2026 01:11:16 +0000 (0:00:00.150) 0:02:28.095 *****
2026-01-07 01:13:28.070597 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:28.070604 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:28.070613 |
orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.070621 | orchestrator | 2026-01-07 01:13:28.070627 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-07 01:13:28.070633 | orchestrator | Wednesday 07 January 2026 01:11:16 +0000 (0:00:00.496) 0:02:28.591 ***** 2026-01-07 01:13:28.070642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.070653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.070663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.070684 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:28.070691 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.070704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.070714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.070739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:28.070746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.070753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.070763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.070785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.070793 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.070800 | orchestrator | 2026-01-07 01:13:28.070807 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:13:28.070815 | orchestrator | Wednesday 07 January 2026 01:11:17 +0000 (0:00:00.682) 0:02:29.273 ***** 2026-01-07 01:13:28.070823 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:13:28.070830 | orchestrator | 2026-01-07 01:13:28.070838 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-07 01:13:28.070845 | orchestrator | Wednesday 07 January 2026 01:11:18 +0000 (0:00:00.533) 0:02:29.807 ***** 2026-01-07 01:13:28.070853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.070861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.070872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.070888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.070896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.070904 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.070911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.070991 | orchestrator | 2026-01-07 01:13:28.070998 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-07 01:13:28.071006 | orchestrator | Wednesday 07 January 2026 01:11:22 +0000 (0:00:04.808) 0:02:34.615 ***** 2026-01-07 01:13:28.071024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071064 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:28.071072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071121 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:28.071128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071178 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.071185 | orchestrator | 2026-01-07 01:13:28.071193 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-07 01:13:28.071200 | orchestrator | Wednesday 07 January 2026 01:11:24 +0000 (0:00:01.349) 0:02:35.965 ***** 2026-01-07 01:13:28.071208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-07 01:13:28.071276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071297 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:28.071305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071351 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:28.071362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.071370 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.071378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.071398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.071406 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.071413 | orchestrator | 2026-01-07 01:13:28.071421 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-07 01:13:28.071428 | orchestrator | Wednesday 07 January 2026 01:11:25 +0000 (0:00:01.047) 0:02:37.012 ***** 2026-01-07 01:13:28.071442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071568 | orchestrator | 2026-01-07 01:13:28.071574 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-07 01:13:28.071581 | orchestrator | Wednesday 07 January 2026 01:11:29 +0000 (0:00:04.340) 0:02:41.353 ***** 2026-01-07 01:13:28.071587 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:13:28.071594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:13:28.071601 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:13:28.071613 | orchestrator | 2026-01-07 01:13:28.071619 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-07 01:13:28.071625 | orchestrator | Wednesday 07 January 2026 01:11:31 +0000 (0:00:01.699) 0:02:43.052 ***** 2026-01-07 01:13:28.071631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.071660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.071685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.071761 | orchestrator | 2026-01-07 01:13:28.071768 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-07 01:13:28.071777 | orchestrator | Wednesday 07 January 2026 01:11:47 +0000 (0:00:15.695) 0:02:58.748 ***** 2026-01-07 01:13:28.071784 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:28.071791 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:28.071798 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:28.071806 | orchestrator | 2026-01-07 01:13:28.071813 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-07 01:13:28.071820 | orchestrator | Wednesday 07 January 2026 01:11:48 +0000 (0:00:01.273) 0:03:00.021 ***** 2026-01-07 01:13:28.071828 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.071835 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.071843 | orchestrator | changed: [testbed-node-2] => 
(item=client.cert-and-key.pem) 2026-01-07 01:13:28.071908 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.071919 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.071927 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.071935 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.071942 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.071949 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.071964 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:13:28.071971 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:13:28.071978 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:13:28.071985 | orchestrator | 2026-01-07 01:13:28.071991 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-07 01:13:28.071998 | orchestrator | Wednesday 07 January 2026 01:11:52 +0000 (0:00:04.383) 0:03:04.404 ***** 2026-01-07 01:13:28.072004 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072011 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072018 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072024 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072031 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072038 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072044 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.072051 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.cert.pem) 2026-01-07 01:13:28.072057 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.072064 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072070 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072079 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072086 | orchestrator | 2026-01-07 01:13:28.072093 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-07 01:13:28.072100 | orchestrator | Wednesday 07 January 2026 01:11:58 +0000 (0:00:05.892) 0:03:10.297 ***** 2026-01-07 01:13:28.072107 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072113 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072119 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:13:28.072124 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072131 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072138 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:13:28.072144 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.072151 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.072158 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:13:28.072165 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072172 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072179 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:13:28.072185 | orchestrator | 2026-01-07 01:13:28.072192 | orchestrator | TASK 
[service-check-containers : octavia | Check containers] ******************* 2026-01-07 01:13:28.072199 | orchestrator | Wednesday 07 January 2026 01:12:04 +0000 (0:00:05.593) 0:03:15.890 ***** 2026-01-07 01:13:28.072211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.072251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.072260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:13:28.072268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.072276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.072283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:13:28.072293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:28.072384 | orchestrator | 2026-01-07 01:13:28.072391 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-01-07 01:13:28.072398 | orchestrator | Wednesday 07 January 2026 01:12:07 +0000 (0:00:03.706) 0:03:19.597 ***** 2026-01-07 01:13:28.072405 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:13:28.072413 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:28.072419 | orchestrator | } 2026-01-07 01:13:28.072427 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:13:28.072433 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:28.072440 | orchestrator | } 2026-01-07 01:13:28.072447 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:13:28.072452 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:28.072459 | orchestrator | } 2026-01-07 01:13:28.072465 | orchestrator | 2026-01-07 01:13:28.072472 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:13:28.072479 | orchestrator | Wednesday 07 January 2026 01:12:08 +0000 (0:00:00.391) 0:03:19.988 ***** 2026-01-07 01:13:28.072486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.072493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.072506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.072538 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:28.072545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.072553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.072560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.072588 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:28.072600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:13:28.072607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:13:28.072615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:13:28.072633 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:13:28.072641 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.072648 | orchestrator | 2026-01-07 01:13:28.072655 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:13:28.072662 | orchestrator | Wednesday 07 January 2026 01:12:09 +0000 (0:00:01.309) 0:03:21.297 ***** 2026-01-07 01:13:28.072668 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:28.072675 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:28.072682 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:28.072689 | orchestrator | 2026-01-07 01:13:28.072696 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-07 01:13:28.072703 | orchestrator | Wednesday 07 January 2026 01:12:09 +0000 (0:00:00.324) 0:03:21.622 ***** 2026-01-07 01:13:28.072709 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:28.072717 | orchestrator | 2026-01-07 01:13:28.072723 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-07 01:13:28.072734 | orchestrator | Wednesday 07 January 2026 01:12:12 +0000 (0:00:02.515) 0:03:24.137 ***** 2026-01-07 01:13:28.072741 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:28.072748 | orchestrator | 2026-01-07 01:13:28.072755 | orchestrator | TASK [octavia : Creating Octavia database 
user and setting permissions] ********
2026-01-07 01:13:28.072762 | orchestrator | Wednesday 07 January 2026 01:12:14 +0000 (0:00:02.306) 0:03:26.444 *****
2026-01-07 01:13:28.072768 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.072774 | orchestrator |
2026-01-07 01:13:28.072781 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-01-07 01:13:28.072787 | orchestrator | Wednesday 07 January 2026 01:12:16 +0000 (0:00:02.226) 0:03:28.671 *****
2026-01-07 01:13:28.072794 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.072801 | orchestrator |
2026-01-07 01:13:28.072812 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-01-07 01:13:28.072819 | orchestrator | Wednesday 07 January 2026 01:12:19 +0000 (0:00:02.073) 0:03:30.744 *****
2026-01-07 01:13:28.072826 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.072833 | orchestrator |
2026-01-07 01:13:28.072840 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-07 01:13:28.072846 | orchestrator | Wednesday 07 January 2026 01:12:39 +0000 (0:00:20.145) 0:03:50.890 *****
2026-01-07 01:13:28.072853 | orchestrator |
2026-01-07 01:13:28.072859 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-07 01:13:28.072865 | orchestrator | Wednesday 07 January 2026 01:12:39 +0000 (0:00:00.067) 0:03:50.957 *****
2026-01-07 01:13:28.072872 | orchestrator |
2026-01-07 01:13:28.072879 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-07 01:13:28.072885 | orchestrator | Wednesday 07 January 2026 01:12:39 +0000 (0:00:00.065) 0:03:51.023 *****
2026-01-07 01:13:28.072893 | orchestrator |
2026-01-07 01:13:28.072900 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-01-07 01:13:28.072907 | orchestrator | Wednesday 07 January 2026 01:12:39 +0000 (0:00:00.256) 0:03:51.280 *****
2026-01-07 01:13:28.072914 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.072921 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.072936 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.072943 | orchestrator |
2026-01-07 01:13:28.072950 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-01-07 01:13:28.072957 | orchestrator | Wednesday 07 January 2026 01:12:53 +0000 (0:00:13.606) 0:04:04.886 *****
2026-01-07 01:13:28.072965 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.072972 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.072979 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.072986 | orchestrator |
2026-01-07 01:13:28.072993 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-01-07 01:13:28.073001 | orchestrator | Wednesday 07 January 2026 01:12:58 +0000 (0:00:05.381) 0:04:10.268 *****
2026-01-07 01:13:28.073008 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.073016 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.073023 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.073031 | orchestrator |
2026-01-07 01:13:28.073038 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-01-07 01:13:28.073045 | orchestrator | Wednesday 07 January 2026 01:13:03 +0000 (0:00:04.946) 0:04:15.215 *****
2026-01-07 01:13:28.073052 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.073060 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.073067 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.073074 | orchestrator |
2026-01-07 01:13:28.073082 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-01-07 01:13:28.073089 | orchestrator | Wednesday 07 January 2026 01:13:13 +0000 (0:00:09.828) 0:04:25.043 *****
2026-01-07 01:13:28.073096 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:28.073104 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:28.073111 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:28.073118 | orchestrator |
2026-01-07 01:13:28.073126 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:13:28.073134 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-07 01:13:28.073142 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 01:13:28.073149 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 01:13:28.073157 | orchestrator |
2026-01-07 01:13:28.073164 | orchestrator |
2026-01-07 01:13:28.073171 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:13:28.073178 | orchestrator | Wednesday 07 January 2026 01:13:25 +0000 (0:00:11.952) 0:04:36.996 *****
2026-01-07 01:13:28.073186 | orchestrator | ===============================================================================
2026-01-07 01:13:28.073193 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.15s
2026-01-07 01:13:28.073200 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.87s
2026-01-07 01:13:28.073208 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.70s
2026-01-07 01:13:28.073215 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.71s
2026-01-07 01:13:28.073257 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.61s
2026-01-07 01:13:28.073265 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.95s
2026-01-07 01:13:28.073272 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.83s
2026-01-07 01:13:28.073279 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.25s
2026-01-07 01:13:28.073286 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.39s
2026-01-07 01:13:28.073298 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.31s
2026-01-07 01:13:28.073314 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 7.20s
2026-01-07 01:13:28.073321 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.89s
2026-01-07 01:13:28.073328 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 5.89s
2026-01-07 01:13:28.073335 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.77s
2026-01-07 01:13:28.073341 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.59s
2026-01-07 01:13:28.073348 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.41s
2026-01-07 01:13:28.073360 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.38s
2026-01-07 01:13:28.073367 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.02s
2026-01-07 01:13:28.073373 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.01s
2026-01-07 01:13:28.073380 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 4.95s
2026-01-07 01:13:28.073386 | orchestrator | 2026-01-07 01:13:28 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:31.110403 | orchestrator | 2026-01-07 01:13:31 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:31.110789 | orchestrator | 2026-01-07 01:13:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:34.149008 | orchestrator | 2026-01-07 01:13:34 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:34.149063 | orchestrator | 2026-01-07 01:13:34 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:37.201618 | orchestrator | 2026-01-07 01:13:37 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:37.202679 | orchestrator | 2026-01-07 01:13:37 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:40.246623 | orchestrator | 2026-01-07 01:13:40 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:40.246675 | orchestrator | 2026-01-07 01:13:40 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:43.286399 | orchestrator | 2026-01-07 01:13:43 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:43.286485 | orchestrator | 2026-01-07 01:13:43 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:46.329057 | orchestrator | 2026-01-07 01:13:46 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:46.329106 | orchestrator | 2026-01-07 01:13:46 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:49.374631 | orchestrator | 2026-01-07 01:13:49 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:49.374721 | orchestrator | 2026-01-07 01:13:49 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:52.423036 | orchestrator | 2026-01-07 01:13:52 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state STARTED
2026-01-07 01:13:52.423149 | orchestrator | 2026-01-07 01:13:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:13:55.473401 | orchestrator | 2026-01-07 01:13:55 | INFO  | Task 9942dcc8-c2ed-448b-8f35-36d2614192c9 is in state SUCCESS
2026-01-07 01:13:55.474942 | orchestrator |
2026-01-07 01:13:55.474993 | orchestrator |
2026-01-07 01:13:55.475003 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:13:55.475011 | orchestrator |
2026-01-07 01:13:55.475017 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-07 01:13:55.475024 | orchestrator | Wednesday 07 January 2026 01:04:25 +0000 (0:00:00.407) 0:00:00.407 *****
2026-01-07 01:13:55.475036 | orchestrator | changed: [testbed-manager]
2026-01-07 01:13:55.475044 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475065 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:55.475069 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:55.475073 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:13:55.475077 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:13:55.475081 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:13:55.475085 | orchestrator |
2026-01-07 01:13:55.475089 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:13:55.475092 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:01.462) 0:00:01.869 *****
2026-01-07 01:13:55.475096 | orchestrator | changed: [testbed-manager]
2026-01-07 01:13:55.475100 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475104 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:13:55.475108 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:55.475111 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:13:55.475116 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:13:55.475119 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:13:55.475123 | orchestrator |
2026-01-07 01:13:55.475127 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:13:55.475131 | orchestrator | Wednesday 07 January 2026 01:04:27 +0000 (0:00:00.995) 0:00:02.865 *****
2026-01-07 01:13:55.475134 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-07 01:13:55.475138 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-07 01:13:55.475142 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-07 01:13:55.475153 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-07 01:13:55.475157 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-07 01:13:55.475160 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-07 01:13:55.475164 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-07 01:13:55.475168 | orchestrator |
2026-01-07 01:13:55.475172 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-07 01:13:55.475175 | orchestrator |
2026-01-07 01:13:55.475179 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-07 01:13:55.475183 | orchestrator | Wednesday 07 January 2026 01:04:29 +0000 (0:00:01.509) 0:00:04.375 *****
2026-01-07 01:13:55.475186 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:55.475190 | orchestrator |
2026-01-07 01:13:55.475194 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-07 01:13:55.475198 | orchestrator | Wednesday 07 January 2026 01:04:30 +0000 (0:00:01.297) 0:00:05.672 *****
2026-01-07 01:13:55.475202 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-07 01:13:55.475206 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-07 01:13:55.475209 | orchestrator |
2026-01-07 01:13:55.475213 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-07 01:13:55.475217 | orchestrator | Wednesday 07 January 2026 01:04:35 +0000 (0:00:05.353) 0:00:11.026 *****
2026-01-07 01:13:55.475220 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 01:13:55.475224 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 01:13:55.475228 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475232 | orchestrator |
2026-01-07 01:13:55.475235 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-07 01:13:55.475239 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:00.618) 0:00:15.408 *****
2026-01-07 01:13:55.475243 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475246 | orchestrator |
2026-01-07 01:13:55.475250 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-07 01:13:55.475254 | orchestrator | Wednesday 07 January 2026 01:04:40 +0000 (0:00:00.618) 0:00:16.027 *****
2026-01-07 01:13:55.475258 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475261 | orchestrator |
2026-01-07 01:13:55.475265 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-07 01:13:55.475272 | orchestrator | Wednesday 07 January 2026 01:04:41 +0000 (0:00:01.174) 0:00:17.202 *****
2026-01-07 01:13:55.475275 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475279 | orchestrator |
2026-01-07 01:13:55.475283 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:13:55.475287 | orchestrator | Wednesday 07 January 2026 01:04:44 +0000 (0:00:02.826) 0:00:20.028 *****
2026-01-07 01:13:55.475290 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475306 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475313 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475320 | orchestrator |
2026-01-07 01:13:55.475327 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-07 01:13:55.475333 | orchestrator | Wednesday 07 January 2026 01:04:45 +0000 (0:00:00.252) 0:00:20.281 *****
2026-01-07 01:13:55.475339 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.475345 | orchestrator |
2026-01-07 01:13:55.475352 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-07 01:13:55.475359 | orchestrator | Wednesday 07 January 2026 01:05:16 +0000 (0:00:31.307) 0:00:51.588 *****
2026-01-07 01:13:55.475365 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475372 | orchestrator |
2026-01-07 01:13:55.475376 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-07 01:13:55.475379 | orchestrator | Wednesday 07 January 2026 01:05:31 +0000 (0:00:12.657) 0:01:07.190 *****
2026-01-07 01:13:55.475383 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.475387 | orchestrator |
2026-01-07 01:13:55.475390 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-07 01:13:55.475394 | orchestrator | Wednesday 07 January 2026 01:05:44 +0000 (0:00:01.466) 0:01:19.847 *****
2026-01-07 01:13:55.475406 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.475410 | orchestrator |
2026-01-07 01:13:55.475414 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-07 01:13:55.475418 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:01.466) 0:01:21.314 *****
2026-01-07 01:13:55.475422 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475425 | orchestrator |
2026-01-07 01:13:55.475429 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:13:55.475433 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:00.703) 0:01:22.018 *****
2026-01-07 01:13:55.475437 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:55.475441 | orchestrator |
2026-01-07 01:13:55.475444 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-07 01:13:55.475448 | orchestrator | Wednesday 07 January 2026 01:05:47 +0000 (0:00:00.586) 0:01:22.604 *****
2026-01-07 01:13:55.475452 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.475510 | orchestrator |
2026-01-07 01:13:55.475523 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-07 01:13:55.475547 | orchestrator | Wednesday 07 January 2026 01:06:05 +0000 (0:00:18.277) 0:01:40.881 *****
2026-01-07 01:13:55.475553 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475559 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475564 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475570 | orchestrator |
2026-01-07 01:13:55.475576 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-07 01:13:55.475582 | orchestrator |
2026-01-07 01:13:55.475642 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-07 01:13:55.475649 | orchestrator | Wednesday 07 January 2026 01:06:05 +0000 (0:00:00.246) 0:01:41.128 *****
2026-01-07 01:13:55.475653 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:55.475658 | orchestrator |
2026-01-07 01:13:55.475666 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-07 01:13:55.475670 | orchestrator | Wednesday 07 January 2026 01:06:06 +0000 (0:00:00.423) 0:01:41.551 *****
2026-01-07 01:13:55.475679 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475684 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475688 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475693 | orchestrator |
2026-01-07 01:13:55.475697 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-07 01:13:55.475702 | orchestrator | Wednesday 07 January 2026 01:06:08 +0000 (0:00:01.733) 0:01:43.285 *****
2026-01-07 01:13:55.475706 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475710 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475715 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475719 | orchestrator |
2026-01-07 01:13:55.475723 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-07 01:13:55.475728 | orchestrator | Wednesday 07 January 2026 01:06:09 +0000 (0:00:01.928) 0:01:45.214 *****
2026-01-07 01:13:55.475734 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475745 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475751 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475757 | orchestrator |
2026-01-07 01:13:55.475764 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-07 01:13:55.475771 | orchestrator | Wednesday 07 January 2026 01:06:10 +0000 (0:00:00.359) 0:01:45.573 *****
2026-01-07 01:13:55.475778 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 01:13:55.475786 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475790 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 01:13:55.475796 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475802 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-07 01:13:55.475809 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-07 01:13:55.475815 | orchestrator |
2026-01-07 01:13:55.475822 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-07 01:13:55.475829 | orchestrator | Wednesday 07 January 2026 01:06:23 +0000 (0:00:13.131) 0:01:58.705 *****
2026-01-07 01:13:55.475836 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475843 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475849 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475853 | orchestrator |
2026-01-07 01:13:55.475857 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-07 01:13:55.475860 | orchestrator | Wednesday 07 January 2026 01:06:23 +0000 (0:00:00.313) 0:01:59.018 *****
2026-01-07 01:13:55.475864 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-07 01:13:55.475868 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.475872 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-07 01:13:55.475875 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475879 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-07 01:13:55.475883 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475886 | orchestrator |
2026-01-07 01:13:55.475890 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-07 01:13:55.475894 | orchestrator | Wednesday 07 January 2026 01:06:24 +0000 (0:00:00.583) 0:01:59.602 *****
2026-01-07 01:13:55.475898 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475901 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475905 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475909 | orchestrator |
2026-01-07 01:13:55.475913 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-07 01:13:55.475917 | orchestrator | Wednesday 07 January 2026 01:06:24 +0000 (0:00:00.531) 0:02:00.134 *****
2026-01-07 01:13:55.475921 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475924 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475928 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475932 | orchestrator |
2026-01-07 01:13:55.475936 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-07 01:13:55.475939 | orchestrator | Wednesday 07 January 2026 01:06:25 +0000 (0:00:00.812) 0:02:00.946 *****
2026-01-07 01:13:55.475947 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475951 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475959 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.475963 | orchestrator |
2026-01-07 01:13:55.475967 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-07 01:13:55.475971 | orchestrator | Wednesday 07 January 2026 01:06:27 +0000 (0:00:02.001) 0:02:02.948 *****
2026-01-07 01:13:55.475975 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.475979 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.475982 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.475986 | orchestrator |
2026-01-07 01:13:55.475990 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-07 01:13:55.475994 | orchestrator | Wednesday 07 January 2026 01:06:49 +0000 (0:00:21.825) 0:02:24.773 *****
2026-01-07 01:13:55.475998 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476004 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476010 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.476016 | orchestrator |
2026-01-07 01:13:55.476022 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-07 01:13:55.476029 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:12.046) 0:02:36.819 *****
2026-01-07 01:13:55.476035 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:13:55.476041 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476047 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476054 | orchestrator |
2026-01-07 01:13:55.476061 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-07 01:13:55.476065 | orchestrator | Wednesday 07 January 2026 01:07:02 +0000 (0:00:00.836) 0:02:37.656 *****
2026-01-07 01:13:55.476069 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476072 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476076 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:13:55.476080 | orchestrator |
2026-01-07 01:13:55.476084 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-07 01:13:55.476087 | orchestrator | Wednesday 07 January 2026 01:07:16 +0000 (0:00:13.882) 0:02:51.539 *****
2026-01-07 01:13:55.476094 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.476098 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476101 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476105 | orchestrator |
2026-01-07 01:13:55.476109 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-07 01:13:55.476113 | orchestrator | Wednesday 07 January 2026 01:07:17 +0000 (0:00:01.015) 0:02:52.554 *****
2026-01-07 01:13:55.476116 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.476120 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476124 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476128 | orchestrator |
2026-01-07 01:13:55.476131 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-07 01:13:55.476135 | orchestrator |
2026-01-07 01:13:55.476139 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:13:55.476143 | orchestrator | Wednesday 07 January 2026 01:07:17 +0000 (0:00:00.553) 0:02:53.108 *****
2026-01-07 01:13:55.476146 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:13:55.476153 | orchestrator |
2026-01-07 01:13:55.476162 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-01-07 01:13:55.476169 | orchestrator | Wednesday 07 January 2026 01:07:18 +0000 (0:00:00.554) 0:02:53.662 *****
2026-01-07 01:13:55.476176 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-07 01:13:55.476210 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-07 01:13:55.476224 | orchestrator |
2026-01-07 01:13:55.476229 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-01-07 01:13:55.476232 | orchestrator | Wednesday 07 January 2026 01:07:21 +0000 (0:00:02.965) 0:02:56.627 *****
2026-01-07 01:13:55.476244 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-07 01:13:55.476249 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-07 01:13:55.476261 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-07 01:13:55.476265 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-07 01:13:55.476269 | orchestrator |
2026-01-07 01:13:55.476278 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-07 01:13:55.476282 | orchestrator | Wednesday 07 January 2026 01:07:27 +0000 (0:00:06.291) 0:03:02.919 *****
2026-01-07 01:13:55.476286 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:13:55.476290 | orchestrator |
2026-01-07 01:13:55.476353 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-07 01:13:55.476361 | orchestrator | Wednesday 07 January 2026 01:07:30 +0000 (0:00:02.831) 0:03:05.750 *****
2026-01-07 01:13:55.476370 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:13:55.476377 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-07 01:13:55.476384 | orchestrator |
2026-01-07 01:13:55.476390 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-07 01:13:55.476397 | orchestrator | Wednesday 07 January 2026 01:07:34 +0000 (0:00:03.743) 0:03:09.494 *****
2026-01-07 01:13:55.476403 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:13:55.476409 | orchestrator |
2026-01-07 01:13:55.476416 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-01-07 01:13:55.476423 | orchestrator | Wednesday 07 January 2026 01:07:37 +0000 (0:00:02.777) 0:03:12.271 *****
2026-01-07 01:13:55.476430 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-07 01:13:55.476437 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-07 01:13:55.476443 | orchestrator |
2026-01-07 01:13:55.476450 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-07 01:13:55.476462 | orchestrator | Wednesday 07 January 2026 01:07:43 +0000 (0:00:06.711) 0:03:18.982 *****
2026-01-07 01:13:55.476472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:13:55.476543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:13:55.476556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:13:55.476585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:13:55.476593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-07 01:13:55.476603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.476622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.476629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 
01:13:55.476636 | orchestrator |
2026-01-07 01:13:55.476645 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-01-07 01:13:55.476652 | orchestrator | Wednesday 07 January 2026 01:07:45 +0000 (0:00:01.714) 0:03:20.697 *****
2026-01-07 01:13:55.476659 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.476665 | orchestrator |
2026-01-07 01:13:55.476672 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-01-07 01:13:55.476678 | orchestrator | Wednesday 07 January 2026 01:07:45 +0000 (0:00:00.130) 0:03:20.827 *****
2026-01-07 01:13:55.476685 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.476691 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476697 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476703 | orchestrator |
2026-01-07 01:13:55.476709 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-01-07 01:13:55.476715 | orchestrator | Wednesday 07 January 2026 01:07:46 +0000 (0:00:00.489) 0:03:21.316 *****
2026-01-07 01:13:55.476722 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:13:55.476728 | orchestrator |
2026-01-07 01:13:55.476734 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-01-07 01:13:55.476741 | orchestrator | Wednesday 07 January 2026 01:07:46 +0000 (0:00:00.689) 0:03:22.006 *****
2026-01-07 01:13:55.476747 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.476753 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.476760 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.476771 | orchestrator |
2026-01-07 01:13:55.476778 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-07 01:13:55.476784 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:00.339) 0:03:22.345 *****
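Editor's note: each loop item dumped in the log above carries a `healthcheck` mapping (`interval`, `retries`, `start_period`, `test`, `timeout`, with the numeric values given as strings). As a rough illustration of how such a definition maps onto Docker-style healthcheck options, here is a minimal sketch. The `build_healthcheck` helper is hypothetical (it is not kolla-ansible's actual code); the seconds-to-nanoseconds conversion follows the Docker Engine API convention for `HealthConfig`.

```python
# Minimal sketch, assuming a service definition shaped like the loop items
# in the log above. `build_healthcheck` is a hypothetical helper, not part
# of kolla-ansible.

def build_healthcheck(service: dict) -> dict:
    hc = service["healthcheck"]
    ns = 1_000_000_000  # Docker's HealthConfig expresses durations in nanoseconds
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]) * ns,
        "timeout": int(hc["timeout"]) * ns,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * ns,
    }

# Trimmed-down copy of the nova-api item from the log:
nova_api = {
    "container_name": "nova_api",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
        "timeout": "30",
    },
}

print(build_healthcheck(nova_api)["retries"])  # → 3
```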
2026-01-07 01:13:55.476790 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:13:55.476797 | orchestrator | 2026-01-07 01:13:55.476803 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-07 01:13:55.476812 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:00.548) 0:03:22.894 ***** 2026-01-07 01:13:55.476820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.476883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.476890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.476901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.476908 | orchestrator | 2026-01-07 01:13:55.476914 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-07 01:13:55.476923 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:03.685) 0:03:26.580 ***** 2026-01-07 01:13:55.476930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.476938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.476945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.476952 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.477335 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.477377 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.477384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477407 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.477430 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.477437 | orchestrator | 2026-01-07 01:13:55.477443 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-07 01:13:55.477450 | orchestrator | Wednesday 07 January 2026 01:07:52 +0000 (0:00:00.743) 0:03:27.323 ***** 2026-01-07 
01:13:55.477457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.477495 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.477501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477511 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.477524 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.477531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.477567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.477574 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.477580 | orchestrator | 2026-01-07 01:13:55.477586 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-07 01:13:55.477593 | orchestrator | Wednesday 07 January 2026 01:07:52 +0000 (0:00:00.912) 0:03:28.236 ***** 2026-01-07 01:13:55.477599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
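Editor's note: the `haproxy` sub-mappings dumped above mix value types for `enabled` — the internal entries use the boolean `True`, while `nova_metadata_external` uses the string `'no'`. A naive truthiness check (`if cfg['enabled']`) would treat the non-empty string `'no'` as enabled. The sketch below, with a hypothetical `enabled_haproxy_entries` helper (not kolla-ansible code), shows one way to normalise that before selecting entries to configure.

```python
# Minimal sketch, assuming 'haproxy' mappings shaped like those in the log.
# `enabled_haproxy_entries` is a hypothetical helper for illustration.

def enabled_haproxy_entries(haproxy: dict) -> list:
    enabled = []
    for name, cfg in haproxy.items():
        value = cfg.get("enabled", False)
        if isinstance(value, str):
            # Normalise string flags ('no', 'yes', ...) before testing,
            # since bool('no') would otherwise be True.
            value = value.lower() in ("yes", "true", "1")
        if value:
            enabled.append(name)
    return sorted(enabled)

# Trimmed-down copy of the nova-metadata haproxy mapping from the log:
nova_metadata_haproxy = {
    "nova_metadata": {"enabled": True, "external": False, "port": "8775"},
    "nova_metadata_external": {"enabled": "no", "external": True, "port": "8775"},
}

print(enabled_haproxy_entries(nova_metadata_haproxy))  # → ['nova_metadata']
```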
2026-01-07 01:13:55.477606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-01-07 01:13:55.477688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.477700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.477820 | orchestrator | 2026-01-07 01:13:55.477827 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-07 01:13:55.477834 | orchestrator | Wednesday 07 January 2026 01:07:56 +0000 (0:00:03.252) 0:03:31.488 ***** 2026-01-07 01:13:55.477841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.477848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478236 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478343 | orchestrator | 
2026-01-07 01:13:55.478349 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-07 01:13:55.478356 | orchestrator | Wednesday 07 January 2026 01:08:03 +0000 (0:00:07.577) 0:03:39.066 ***** 2026-01-07 01:13:55.478363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 
'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.478408 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.478418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.478443 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
01:13:55.478466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.478483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.478490 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.478497 | orchestrator | 2026-01-07 01:13:55.478503 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-07 01:13:55.478509 | orchestrator | Wednesday 07 January 2026 01:08:04 +0000 (0:00:00.687) 0:03:39.753 ***** 2026-01-07 01:13:55.478515 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.478522 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.478528 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.478534 | orchestrator | 2026-01-07 01:13:55.478540 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-01-07 01:13:55.478547 | orchestrator | Wednesday 07 January 2026 01:08:05 +0000 (0:00:00.631) 0:03:40.385 ***** 2026-01-07 01:13:55.478553 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.478560 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.478570 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.478577 | orchestrator | 2026-01-07 01:13:55.478583 | orchestrator | TASK [nova : Copying over vendordata file for nova services] 
******************* 2026-01-07 01:13:55.478589 | orchestrator | Wednesday 07 January 2026 01:08:06 +0000 (0:00:00.949) 0:03:41.334 ***** 2026-01-07 01:13:55.478596 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-01-07 01:13:55.478603 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-07 01:13:55.478609 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.478615 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-01-07 01:13:55.478621 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-07 01:13:55.478627 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.478633 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-01-07 01:13:55.478639 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-07 01:13:55.478646 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.478652 | orchestrator | 2026-01-07 01:13:55.478658 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-01-07 01:13:55.478664 | orchestrator | Wednesday 07 January 2026 01:08:06 +0000 (0:00:00.578) 0:03:41.912 ***** 2026-01-07 01:13:55.478671 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774'}) 2026-01-07 01:13:55.478677 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775'}) 2026-01-07 01:13:55.478684 | orchestrator | 2026-01-07 01:13:55.478691 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-01-07 01:13:55.478697 | orchestrator | Wednesday 07 January 2026 01:08:07 +0000 (0:00:01.306) 0:03:43.219 ***** 2026-01-07 01:13:55.478703 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.478710 | orchestrator | changed: [testbed-node-1] 2026-01-07 
01:13:55.478716 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.478722 | orchestrator | 2026-01-07 01:13:55.478729 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-01-07 01:13:55.478735 | orchestrator | Wednesday 07 January 2026 01:08:10 +0000 (0:00:02.414) 0:03:45.633 ***** 2026-01-07 01:13:55.478741 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.478747 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.478753 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.478759 | orchestrator | 2026-01-07 01:13:55.478766 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-01-07 01:13:55.478772 | orchestrator | Wednesday 07 January 2026 01:08:12 +0000 (0:00:02.066) 0:03:47.700 ***** 2026-01-07 01:13:55.478796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478807 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-07 01:13:55.478870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478878 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.478892 | orchestrator | 2026-01-07 01:13:55.478898 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-01-07 01:13:55.478904 | orchestrator | Wednesday 07 January 2026 01:08:15 +0000 (0:00:02.553) 0:03:50.254 ***** 2026-01-07 01:13:55.478928 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:13:55.478935 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.478942 | orchestrator | } 2026-01-07 01:13:55.478948 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:13:55.478955 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.478961 | orchestrator | } 2026-01-07 01:13:55.478968 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:13:55.478974 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.478980 | orchestrator | } 2026-01-07 01:13:55.478987 | orchestrator | 2026-01-07 01:13:55.478993 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:13:55.478999 | orchestrator | Wednesday 07 January 2026 01:08:15 +0000 (0:00:00.552) 0:03:50.806 ***** 2026-01-07 01:13:55.479008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.479037 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.479085 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-07 01:13:55.479123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.479134 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479140 | orchestrator | 2026-01-07 01:13:55.479147 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:13:55.479151 | orchestrator | Wednesday 07 January 2026 01:08:16 +0000 (0:00:00.941) 0:03:51.747 ***** 2026-01-07 01:13:55.479155 | orchestrator | 2026-01-07 01:13:55.479159 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:13:55.479163 | orchestrator | Wednesday 07 January 2026 01:08:16 +0000 (0:00:00.127) 0:03:51.875 ***** 2026-01-07 01:13:55.479167 | orchestrator | 2026-01-07 01:13:55.479170 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:13:55.479174 | orchestrator | Wednesday 07 January 2026 01:08:16 +0000 (0:00:00.120) 0:03:51.995 ***** 2026-01-07 01:13:55.479178 | orchestrator | 2026-01-07 01:13:55.479182 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] 
********************** 2026-01-07 01:13:55.479186 | orchestrator | Wednesday 07 January 2026 01:08:17 +0000 (0:00:00.271) 0:03:52.266 ***** 2026-01-07 01:13:55.479189 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.479193 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.479197 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.479201 | orchestrator | 2026-01-07 01:13:55.479204 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-07 01:13:55.479211 | orchestrator | Wednesday 07 January 2026 01:08:35 +0000 (0:00:18.951) 0:04:11.218 ***** 2026-01-07 01:13:55.479215 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.479219 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.479222 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.479226 | orchestrator | 2026-01-07 01:13:55.479230 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-01-07 01:13:55.479234 | orchestrator | Wednesday 07 January 2026 01:08:46 +0000 (0:00:10.556) 0:04:21.774 ***** 2026-01-07 01:13:55.479239 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.479245 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.479251 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.479257 | orchestrator | 2026-01-07 01:13:55.479263 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-07 01:13:55.479270 | orchestrator | 2026-01-07 01:13:55.479276 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:13:55.479282 | orchestrator | Wednesday 07 January 2026 01:08:55 +0000 (0:00:09.278) 0:04:31.053 ***** 2026-01-07 01:13:55.479289 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 
2026-01-07 01:13:55.479309 | orchestrator | 2026-01-07 01:13:55.479314 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:13:55.479320 | orchestrator | Wednesday 07 January 2026 01:08:57 +0000 (0:00:01.321) 0:04:32.375 ***** 2026-01-07 01:13:55.479325 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479330 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.479335 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.479341 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479346 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479351 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479356 | orchestrator | 2026-01-07 01:13:55.479362 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-01-07 01:13:55.479366 | orchestrator | Wednesday 07 January 2026 01:08:57 +0000 (0:00:00.820) 0:04:33.195 ***** 2026-01-07 01:13:55.479370 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.479374 | orchestrator | 2026-01-07 01:13:55.479377 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-01-07 01:13:55.479381 | orchestrator | Wednesday 07 January 2026 01:09:19 +0000 (0:00:21.942) 0:04:55.138 ***** 2026-01-07 01:13:55.479389 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:13:55.479393 | orchestrator | 2026-01-07 01:13:55.479397 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-01-07 01:13:55.479401 | orchestrator | Wednesday 07 January 2026 01:09:21 +0000 (0:00:01.462) 0:04:56.600 ***** 2026-01-07 01:13:55.479404 | orchestrator | included: service-image-info for testbed-node-3 2026-01-07 01:13:55.479408 | orchestrator | 2026-01-07 01:13:55.479412 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-01-07 01:13:55.479416 | 
orchestrator | Wednesday 07 January 2026 01:09:22 +0000 (0:00:00.741) 0:04:57.341 ***** 2026-01-07 01:13:55.479420 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:13:55.479423 | orchestrator | 2026-01-07 01:13:55.479427 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-01-07 01:13:55.479431 | orchestrator | Wednesday 07 January 2026 01:09:25 +0000 (0:00:03.096) 0:05:00.438 ***** 2026-01-07 01:13:55.479435 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:13:55.479439 | orchestrator | 2026-01-07 01:13:55.479442 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-01-07 01:13:55.479446 | orchestrator | Wednesday 07 January 2026 01:09:27 +0000 (0:00:02.064) 0:05:02.503 ***** 2026-01-07 01:13:55.479450 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479454 | orchestrator | 2026-01-07 01:13:55.479458 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-01-07 01:13:55.479462 | orchestrator | Wednesday 07 January 2026 01:09:29 +0000 (0:00:02.123) 0:05:04.626 ***** 2026-01-07 01:13:55.479465 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479469 | orchestrator | 2026-01-07 01:13:55.479473 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-01-07 01:13:55.479493 | orchestrator | Wednesday 07 January 2026 01:09:31 +0000 (0:00:01.741) 0:05:06.367 ***** 2026-01-07 01:13:55.479498 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-07 01:13:55.479501 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-07 01:13:55.479505 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-07 01:13:55.479509 | orchestrator | 2026-01-07 01:13:55.479513 | orchestrator | TASK [nova-cell : Get current Libvirt version] 
********************************* 2026-01-07 01:13:55.479516 | orchestrator | Wednesday 07 January 2026 01:09:40 +0000 (0:00:09.477) 0:05:15.845 ***** 2026-01-07 01:13:55.479520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 01:13:55.479524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 01:13:55.479540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 01:13:55.479544 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479547 | orchestrator | 2026-01-07 01:13:55.479551 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-01-07 01:13:55.479555 | orchestrator | Wednesday 07 January 2026 01:09:45 +0000 (0:00:05.049) 0:05:20.894 ***** 2026-01-07 01:13:55.479559 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})  2026-01-07 01:13:55.479566 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})  2026-01-07 01:13:55.479571 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})  2026-01-07 01:13:55.479578 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479581 | orchestrator | 2026-01-07 01:13:55.479585 | orchestrator | 
TASK [Load and persist br_netfilter module] ************************************ 2026-01-07 01:13:55.479589 | orchestrator | Wednesday 07 January 2026 01:09:49 +0000 (0:00:03.367) 0:05:24.262 ***** 2026-01-07 01:13:55.479593 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479596 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479600 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479604 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:13:55.479608 | orchestrator | 2026-01-07 01:13:55.479612 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 01:13:55.479615 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 (0:00:01.007) 0:05:25.270 ***** 2026-01-07 01:13:55.479619 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:13:55.479623 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-07 01:13:55.479627 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:13:55.479631 | orchestrator | 2026-01-07 01:13:55.479634 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 01:13:55.479638 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 (0:00:00.725) 0:05:25.995 ***** 2026-01-07 01:13:55.479642 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:13:55.479646 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-07 01:13:55.479650 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:13:55.479653 | orchestrator | 2026-01-07 01:13:55.479657 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 01:13:55.479661 | orchestrator | Wednesday 07 January 2026 01:09:52 +0000 (0:00:01.269) 0:05:27.265 ***** 2026-01-07 01:13:55.479665 | orchestrator | skipping: [testbed-node-3] 
=> (item=br_netfilter)  2026-01-07 01:13:55.479668 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.479672 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-07 01:13:55.479676 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.479680 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-07 01:13:55.479683 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.479687 | orchestrator | 2026-01-07 01:13:55.479691 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-07 01:13:55.479695 | orchestrator | Wednesday 07 January 2026 01:09:52 +0000 (0:00:00.732) 0:05:27.998 ***** 2026-01-07 01:13:55.479698 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:13:55.479702 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:13:55.479706 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479710 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:13:55.479713 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:13:55.479717 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479721 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:13:55.479725 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:13:55.479742 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:13:55.479746 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:13:55.479750 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479754 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 
01:13:55.479758 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:13:55.479761 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:13:55.479770 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:13:55.479774 | orchestrator | 2026-01-07 01:13:55.479778 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-07 01:13:55.479782 | orchestrator | Wednesday 07 January 2026 01:09:53 +0000 (0:00:01.144) 0:05:29.143 ***** 2026-01-07 01:13:55.479786 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479789 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479793 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479797 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.479801 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.479804 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.479808 | orchestrator | 2026-01-07 01:13:55.479812 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-07 01:13:55.479815 | orchestrator | Wednesday 07 January 2026 01:09:54 +0000 (0:00:01.094) 0:05:30.238 ***** 2026-01-07 01:13:55.479819 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.479823 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.479827 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.479830 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.479834 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.479838 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.479841 | orchestrator | 2026-01-07 01:13:55.479848 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-07 01:13:55.479855 | orchestrator | Wednesday 07 January 2026 
01:09:56 +0000 (0:00:01.453) 0:05:31.691 ***** 2026-01-07 01:13:55.479862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2026-01-07 01:13:55.479928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.479994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480013 | orchestrator | 2026-01-07 01:13:55.480019 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:13:55.480026 | orchestrator | Wednesday 07 January 2026 01:09:59 +0000 (0:00:02.823) 0:05:34.514 ***** 2026-01-07 01:13:55.480033 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:13:55.480037 | orchestrator | 2026-01-07 01:13:55.480041 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-07 01:13:55.480045 | orchestrator | Wednesday 07 January 2026 01:10:00 +0000 (0:00:01.279) 0:05:35.794 ***** 2026-01-07 01:13:55.480063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.480165 | orchestrator | 2026-01-07 01:13:55.480168 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-07 01:13:55.480172 | orchestrator | Wednesday 07 January 2026 01:10:04 +0000 (0:00:03.524) 0:05:39.318 ***** 2026-01-07 01:13:55.480176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.480183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.480198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.480204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.480209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480213 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.480217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480224 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.480228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.480232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.480246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480251 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 01:13:55.480257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.480261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480265 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.480269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.480276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480280 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.480284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.480311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.480316 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.480320 | orchestrator | 2026-01-07 01:13:55.480324 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-07 01:13:55.480328 | orchestrator | Wednesday 07 January 2026 01:10:06 +0000 (0:00:02.141) 0:05:41.460 ***** 2026-01-07 01:13:55.480332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.480336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.480343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.480347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.480362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.480380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.480386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480391 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.480395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480401 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480409 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.480413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.480428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480433 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.480437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.480443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480451 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.480456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.480460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.480463 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.480467 | orchestrator |
2026-01-07 01:13:55.480471 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-07 01:13:55.480475 | orchestrator | Wednesday 07 January 2026 01:10:08 +0000 (0:00:02.239) 0:05:43.699 *****
2026-01-07 01:13:55.480479 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.480483 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.480487 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.480491 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:13:55.480494 | orchestrator |
2026-01-07 01:13:55.480498 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-01-07 01:13:55.480502 | orchestrator | Wednesday 07 January 2026 01:10:09 +0000 (0:00:00.875) 0:05:44.575 *****
2026-01-07 01:13:55.480511 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:13:55.480515 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 01:13:55.480519 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 01:13:55.480523 | orchestrator |
2026-01-07 01:13:55.480526 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-01-07 01:13:55.480531 | orchestrator | Wednesday 07 January 2026 01:10:10 +0000 (0:00:01.122) 0:05:45.698 *****
2026-01-07 01:13:55.480535 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:13:55.480539 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 01:13:55.480542 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 01:13:55.480546 | orchestrator |
2026-01-07 01:13:55.480550 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-01-07 01:13:55.480566 | orchestrator | Wednesday 07 January 2026 01:10:11 +0000 (0:00:00.978) 0:05:46.676 *****
2026-01-07 01:13:55.480570 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:13:55.480575 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:13:55.480578 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:13:55.480582 | orchestrator |
2026-01-07 01:13:55.480586 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-01-07 01:13:55.480590 | orchestrator | Wednesday 07 January 2026 01:10:11 +0000 (0:00:00.510) 0:05:47.187 *****
2026-01-07 01:13:55.480594 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:13:55.480598 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:13:55.480602 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:13:55.480605 | orchestrator |
2026-01-07 01:13:55.480609 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-01-07 01:13:55.480617 | orchestrator | Wednesday 07 January 2026 01:10:12 +0000 (0:00:00.497) 0:05:47.684 *****
2026-01-07 01:13:55.480621 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:13:55.480625 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:13:55.480629 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:13:55.480633 | orchestrator |
2026-01-07 01:13:55.480636 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-07 01:13:55.480640 | orchestrator | Wednesday 07 January 2026 01:10:13 +0000 (0:00:01.288) 0:05:48.973 *****
2026-01-07 01:13:55.480644 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:13:55.480648 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:13:55.480652 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:13:55.480656 | orchestrator |
2026-01-07 01:13:55.480660 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-07 01:13:55.480663 | orchestrator | Wednesday 07 January 2026 01:10:14 +0000 (0:00:01.134) 0:05:50.107 *****
2026-01-07 01:13:55.480667 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:13:55.480673 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:13:55.480677 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:13:55.480681 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-07 01:13:55.480685 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-07 01:13:55.480689 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-07 01:13:55.480693 | orchestrator |
2026-01-07 01:13:55.480696 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-07 01:13:55.480700 | orchestrator | Wednesday 07 January 2026 01:10:18 +0000 (0:00:03.710) 0:05:53.817 *****
2026-01-07 01:13:55.480704 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480708 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.480712 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.480715 | orchestrator |
2026-01-07 01:13:55.480719 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-07 01:13:55.480723 | orchestrator | Wednesday 07 January 2026 01:10:18 +0000 (0:00:00.297) 0:05:54.115 *****
2026-01-07 01:13:55.480727 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480731 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.480735 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.480739 | orchestrator |
2026-01-07 01:13:55.480742 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-07 01:13:55.480746 | orchestrator | Wednesday 07 January 2026 01:10:19 +0000 (0:00:00.471) 0:05:54.587 *****
2026-01-07 01:13:55.480750 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:13:55.480754 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:13:55.480758 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:13:55.480762 | orchestrator |
2026-01-07 01:13:55.480766 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-07 01:13:55.480770 | orchestrator | Wednesday 07 January 2026 01:10:20 +0000 (0:00:01.210) 0:05:55.798 *****
2026-01-07 01:13:55.480774 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-01-07 01:13:55.480778 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-01-07 01:13:55.480782 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True})
2026-01-07 01:13:55.480787 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-01-07 01:13:55.480794 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-01-07 01:13:55.480798 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'})
2026-01-07 01:13:55.480801 | orchestrator |
2026-01-07 01:13:55.480805 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-07 01:13:55.480809 | orchestrator | Wednesday 07 January 2026 01:10:23 +0000 (0:00:03.051) 0:05:58.849 *****
2026-01-07 01:13:55.480813 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:13:55.480817 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:13:55.480834 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:13:55.480838 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:13:55.480842 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:13:55.480846 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:13:55.480850 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:13:55.480854 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:13:55.480858 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:13:55.480861 | orchestrator |
2026-01-07 01:13:55.480868 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-07 01:13:55.480875 | orchestrator | Wednesday 07 January 2026 01:10:27 +0000 (0:00:00.128) 0:06:02.356 *****
2026-01-07 01:13:55.480881 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480887 | orchestrator |
2026-01-07 01:13:55.480894 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-07 01:13:55.480900 | orchestrator | Wednesday 07 January 2026 01:10:27 +0000 (0:00:00.128) 0:06:02.485 *****
2026-01-07 01:13:55.480906 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480913 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.480920 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.480927 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.480931 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.480935 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.480939 | orchestrator |
2026-01-07 01:13:55.480942 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-07 01:13:55.480946 | orchestrator | Wednesday 07 January 2026 01:10:28 +0000 (0:00:00.849) 0:06:03.334 *****
2026-01-07 01:13:55.480950 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:13:55.480954 | orchestrator |
2026-01-07 01:13:55.480958 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-01-07 01:13:55.480961 | orchestrator | Wednesday 07 January 2026 01:10:28 +0000 (0:00:00.694) 0:06:04.029 *****
2026-01-07 01:13:55.480965 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.480971 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.480975 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.480979 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.480983 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.480986 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.480990 | orchestrator |
2026-01-07 01:13:55.480994 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-01-07 01:13:55.480998 | orchestrator | Wednesday 07 January 2026 01:10:29 +0000 (0:00:00.553) 0:06:04.582 *****
2026-01-07 01:13:55.481002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481101 | orchestrator |
2026-01-07 01:13:55.481105 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-01-07 01:13:55.481109 | orchestrator | Wednesday 07 January 2026 01:10:33 +0000 (0:00:03.993) 0:06:08.576 *****
2026-01-07 01:13:55.481124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:13:55.481145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:13:55.481155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:13:55.481187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:13:55.481193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image':
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481205 | orchestrator | 2026-01-07 01:13:55.481209 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-07 01:13:55.481212 | orchestrator | Wednesday 07 January 2026 01:10:39 +0000 (0:00:06.053) 0:06:14.629 ***** 2026-01-07 01:13:55.481216 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481220 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481224 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481228 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481231 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481235 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481239 | orchestrator | 2026-01-07 01:13:55.481243 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-07 01:13:55.481246 | orchestrator | Wednesday 07 January 
2026 01:10:41 +0000 (0:00:01.862) 0:06:16.492 ***** 2026-01-07 01:13:55.481250 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:13:55.481254 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:13:55.481258 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-07 01:13:55.481261 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:13:55.481265 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:13:55.481269 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-07 01:13:55.481273 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:13:55.481277 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481281 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:13:55.481284 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481288 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-07 01:13:55.481292 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481368 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:13:55.481381 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:13:55.481385 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-07 01:13:55.481389 | orchestrator | 2026-01-07 01:13:55.481393 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 
2026-01-07 01:13:55.481397 | orchestrator | Wednesday 07 January 2026 01:10:44 +0000 (0:00:03.703) 0:06:20.195 ***** 2026-01-07 01:13:55.481401 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481405 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481408 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481417 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481426 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481430 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481433 | orchestrator | 2026-01-07 01:13:55.481437 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-07 01:13:55.481441 | orchestrator | Wednesday 07 January 2026 01:10:45 +0000 (0:00:00.863) 0:06:21.059 ***** 2026-01-07 01:13:55.481445 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:13:55.481449 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:13:55.481453 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:13:55.481456 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-07 01:13:55.481460 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:13:55.481464 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-07 01:13:55.481468 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481472 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481478 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481482 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481486 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481490 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481494 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481497 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481501 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-07 01:13:55.481505 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481509 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481513 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481516 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481520 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481524 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-07 01:13:55.481528 | orchestrator | 2026-01-07 01:13:55.481532 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-07 
01:13:55.481535 | orchestrator | Wednesday 07 January 2026 01:10:50 +0000 (0:00:05.067) 0:06:26.126 ***** 2026-01-07 01:13:55.481539 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:13:55.481543 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:13:55.481547 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 01:13:55.481551 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:13:55.481555 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:13:55.481561 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:13:55.481565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-07 01:13:55.481569 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:13:55.481572 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-07 01:13:55.481576 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:13:55.481580 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:13:55.481584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 01:13:55.481588 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:13:55.481591 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481595 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:13:55.481601 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:13:55.481605 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481609 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-07 01:13:55.481613 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481616 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:13:55.481620 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-07 01:13:55.481624 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:13:55.481628 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:13:55.481632 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-07 01:13:55.481636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:13:55.481639 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:13:55.481643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-07 01:13:55.481647 | orchestrator | 2026-01-07 01:13:55.481651 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-07 01:13:55.481654 | orchestrator | Wednesday 07 January 2026 01:10:57 +0000 (0:00:06.606) 0:06:32.733 ***** 2026-01-07 01:13:55.481658 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481662 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481666 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481670 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481673 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481677 
| orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481681 | orchestrator | 2026-01-07 01:13:55.481686 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-07 01:13:55.481690 | orchestrator | Wednesday 07 January 2026 01:10:58 +0000 (0:00:00.524) 0:06:33.258 ***** 2026-01-07 01:13:55.481694 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481698 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481702 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481705 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481709 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481713 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481717 | orchestrator | 2026-01-07 01:13:55.481720 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-07 01:13:55.481724 | orchestrator | Wednesday 07 January 2026 01:10:58 +0000 (0:00:00.742) 0:06:34.000 ***** 2026-01-07 01:13:55.481728 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481734 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481738 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481742 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.481745 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.481749 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.481753 | orchestrator | 2026-01-07 01:13:55.481757 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-07 01:13:55.481761 | orchestrator | Wednesday 07 January 2026 01:11:00 +0000 (0:00:01.755) 0:06:35.755 ***** 2026-01-07 01:13:55.481765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.481770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.481776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481781 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.481791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.481797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481801 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.481812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.481816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481820 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.481833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481837 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.481845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481849 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.481860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.481864 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481868 | orchestrator | 2026-01-07 01:13:55.481871 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-07 01:13:55.481875 | orchestrator | Wednesday 07 January 2026 01:11:01 +0000 (0:00:01.390) 0:06:37.146 ***** 2026-01-07 01:13:55.481879 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-07 01:13:55.481883 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481887 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.481893 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-07 01:13:55.481897 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481901 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.481904 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  
2026-01-07 01:13:55.481908 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481912 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.481916 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-07 01:13:55.481920 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481925 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.481929 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-07 01:13:55.481933 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481937 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.481940 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-07 01:13:55.481944 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-07 01:13:55.481948 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.481952 | orchestrator | 2026-01-07 01:13:55.481955 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-01-07 01:13:55.481959 | orchestrator | Wednesday 07 January 2026 01:11:02 +0000 (0:00:00.643) 0:06:37.789 ***** 2026-01-07 01:13:55.481963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 
01:13:55.481980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:13:55.481998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:13:55.482057 | orchestrator | 2026-01-07 01:13:55.482064 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-01-07 01:13:55.482070 | orchestrator | Wednesday 07 January 2026 01:11:05 +0000 (0:00:03.216) 0:06:41.005 ***** 2026-01-07 01:13:55.482074 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 01:13:55.482078 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482082 | orchestrator | } 2026-01-07 01:13:55.482085 | orchestrator | changed: [testbed-node-4] => { 2026-01-07 01:13:55.482089 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482093 | orchestrator | } 2026-01-07 01:13:55.482097 | orchestrator | changed: [testbed-node-5] => { 2026-01-07 01:13:55.482101 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482104 | orchestrator | } 2026-01-07 01:13:55.482108 | orchestrator | changed: [testbed-node-0] => { 2026-01-07 01:13:55.482112 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482116 | orchestrator | } 2026-01-07 01:13:55.482120 | orchestrator | changed: [testbed-node-1] => { 2026-01-07 01:13:55.482123 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482127 | orchestrator | } 2026-01-07 01:13:55.482131 | orchestrator | changed: [testbed-node-2] => { 2026-01-07 01:13:55.482135 | orchestrator |  "msg": "Notifying handlers" 2026-01-07 01:13:55.482139 | orchestrator | } 2026-01-07 01:13:55.482143 | orchestrator | 2026-01-07 01:13:55.482146 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-07 01:13:55.482150 | orchestrator | Wednesday 07 January 2026 01:11:06 +0000 (0:00:00.661) 0:06:41.667 ***** 2026-01-07 01:13:55.482156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.482160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.482164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.482168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.482182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482186 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.482190 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.482195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:13:55.482200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:13:55.482204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482210 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.482214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.482220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482224 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.482229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.482234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482238 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.482242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:13:55.482246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:13:55.482253 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.482257 | orchestrator | 2026-01-07 01:13:55.482260 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:13:55.482264 | orchestrator | Wednesday 07 January 2026 01:11:08 +0000 (0:00:02.034) 0:06:43.702 ***** 2026-01-07 01:13:55.482268 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.482272 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.482276 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.482280 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.482284 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.482287 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.482291 | orchestrator | 2026-01-07 01:13:55.482311 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-01-07 01:13:55.482317 | orchestrator | Wednesday 07 January 2026 01:11:09 +0000 (0:00:00.825) 0:06:44.527 ***** 2026-01-07 01:13:55.482323 | orchestrator | 2026-01-07 01:13:55.482328 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:13:55.482334 | orchestrator | Wednesday 07 January 2026 01:11:09 +0000 (0:00:00.128) 0:06:44.656 ***** 2026-01-07 01:13:55.482339 | orchestrator | 2026-01-07 01:13:55.482345 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:13:55.482351 | orchestrator | Wednesday 07 January 2026 01:11:09 +0000 (0:00:00.126) 0:06:44.782 ***** 2026-01-07 01:13:55.482357 | orchestrator | 2026-01-07 01:13:55.482367 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:13:55.482374 | orchestrator | Wednesday 07 January 2026 01:11:09 +0000 (0:00:00.128) 0:06:44.911 ***** 2026-01-07 01:13:55.482380 | orchestrator | 2026-01-07 01:13:55.482387 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:13:55.482391 | orchestrator | Wednesday 07 January 2026 01:11:09 +0000 (0:00:00.126) 0:06:45.037 ***** 2026-01-07 01:13:55.482395 | orchestrator | 2026-01-07 01:13:55.482398 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:13:55.482402 | orchestrator | Wednesday 07 January 2026 01:11:10 +0000 (0:00:00.318) 0:06:45.356 ***** 2026-01-07 01:13:55.482406 | orchestrator | 2026-01-07 01:13:55.482410 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-07 01:13:55.482414 | orchestrator | Wednesday 07 January 2026 01:11:10 +0000 (0:00:00.132) 0:06:45.488 ***** 2026-01-07 01:13:55.482417 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.482421 | orchestrator | 
changed: [testbed-node-2] 2026-01-07 01:13:55.482425 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.482429 | orchestrator | 2026-01-07 01:13:55.482432 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-07 01:13:55.482436 | orchestrator | Wednesday 07 January 2026 01:11:21 +0000 (0:00:11.228) 0:06:56.717 ***** 2026-01-07 01:13:55.482440 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.482444 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.482448 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:13:55.482451 | orchestrator | 2026-01-07 01:13:55.482455 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-07 01:13:55.482459 | orchestrator | Wednesday 07 January 2026 01:11:38 +0000 (0:00:16.727) 0:07:13.444 ***** 2026-01-07 01:13:55.482462 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.482466 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.482470 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.482474 | orchestrator | 2026-01-07 01:13:55.482477 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-07 01:13:55.482486 | orchestrator | Wednesday 07 January 2026 01:11:53 +0000 (0:00:15.323) 0:07:28.767 ***** 2026-01-07 01:13:55.482490 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.482494 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.482498 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.482502 | orchestrator | 2026-01-07 01:13:55.482505 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-07 01:13:55.482509 | orchestrator | Wednesday 07 January 2026 01:12:23 +0000 (0:00:30.449) 0:07:59.217 ***** 2026-01-07 01:13:55.482513 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.482517 | orchestrator | changed: 
[testbed-node-4] 2026-01-07 01:13:55.482520 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.482524 | orchestrator | 2026-01-07 01:13:55.482528 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-07 01:13:55.482532 | orchestrator | Wednesday 07 January 2026 01:12:24 +0000 (0:00:00.714) 0:07:59.932 ***** 2026-01-07 01:13:55.482536 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.482539 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.482543 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.482547 | orchestrator | 2026-01-07 01:13:55.482551 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-07 01:13:55.482554 | orchestrator | Wednesday 07 January 2026 01:12:25 +0000 (0:00:00.692) 0:08:00.624 ***** 2026-01-07 01:13:55.482558 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:13:55.482562 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:13:55.482566 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:13:55.482569 | orchestrator | 2026-01-07 01:13:55.482573 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-07 01:13:55.482577 | orchestrator | Wednesday 07 January 2026 01:12:46 +0000 (0:00:21.364) 0:08:21.989 ***** 2026-01-07 01:13:55.482581 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.482585 | orchestrator | 2026-01-07 01:13:55.482589 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-07 01:13:55.482592 | orchestrator | Wednesday 07 January 2026 01:12:47 +0000 (0:00:00.358) 0:08:22.348 ***** 2026-01-07 01:13:55.482596 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.482600 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.482604 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.482608 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 01:13:55.482611 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.482615 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-07 01:13:55.482619 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:13:55.482623 | orchestrator | 2026-01-07 01:13:55.482627 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-07 01:13:55.482630 | orchestrator | Wednesday 07 January 2026 01:13:07 +0000 (0:00:20.873) 0:08:43.221 ***** 2026-01-07 01:13:55.482634 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.482638 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.482642 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.482646 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.482649 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.482653 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.482657 | orchestrator | 2026-01-07 01:13:55.482661 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-07 01:13:55.482664 | orchestrator | Wednesday 07 January 2026 01:13:17 +0000 (0:00:09.529) 0:08:52.751 ***** 2026-01-07 01:13:55.482668 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:13:55.482672 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:13:55.482676 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:13:55.482679 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:13:55.482683 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:13:55.482687 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-01-07 01:13:55.482693 | orchestrator | 2026-01-07 01:13:55.482697 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2026-01-07 01:13:55.482703 | orchestrator | Wednesday 07 January 2026 01:13:21 +0000 (0:00:03.593) 0:08:56.345 ***** 2026-01-07 01:13:55.482707 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:13:55.482711 | orchestrator | 2026-01-07 01:13:55.482714 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-07 01:13:55.482718 | orchestrator | Wednesday 07 January 2026 01:13:34 +0000 (0:00:12.955) 0:09:09.300 ***** 2026-01-07 01:13:55.482722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:13:55.482726 | orchestrator | 2026-01-07 01:13:55.482730 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-07 01:13:55.482734 | orchestrator | Wednesday 07 January 2026 01:13:35 +0000 (0:00:01.240) 0:09:10.540 ***** 2026-01-07 01:13:55.482737 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:13:55.482741 | orchestrator | 2026-01-07 01:13:55.482745 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-07 01:13:55.482749 | orchestrator | Wednesday 07 January 2026 01:13:36 +0000 (0:00:01.441) 0:09:11.982 ***** 2026-01-07 01:13:55.482753 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 01:13:55.482756 | orchestrator | 2026-01-07 01:13:55.482760 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-07 01:13:55.482764 | orchestrator | 2026-01-07 01:13:55.482768 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-07 01:13:55.482772 | orchestrator | Wednesday 07 January 2026 01:13:49 +0000 (0:00:12.895) 0:09:24.878 ***** 2026-01-07 01:13:55.482775 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:13:55.482779 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:13:55.482783 | 
orchestrator | changed: [testbed-node-2]
2026-01-07 01:13:55.482787 | orchestrator |
2026-01-07 01:13:55.482790 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-01-07 01:13:55.482794 | orchestrator |
2026-01-07 01:13:55.482798 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-01-07 01:13:55.482803 | orchestrator | Wednesday 07 January 2026 01:13:50 +0000 (0:00:01.023) 0:09:25.901 *****
2026-01-07 01:13:55.482807 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.482811 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.482815 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.482819 | orchestrator |
2026-01-07 01:13:55.482823 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-01-07 01:13:55.482826 | orchestrator |
2026-01-07 01:13:55.482830 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-01-07 01:13:55.482834 | orchestrator | Wednesday 07 January 2026 01:13:51 +0000 (0:00:00.741) 0:09:26.642 *****
2026-01-07 01:13:55.482838 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-01-07 01:13:55.482841 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:13:55.482845 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482849 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-01-07 01:13:55.482853 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-07 01:13:55.482857 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.482861 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:13:55.482865 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-07 01:13:55.482868 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:13:55.482872 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482876 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-07 01:13:55.482880 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-07 01:13:55.482886 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.482890 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:13:55.482894 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-07 01:13:55.482898 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:13:55.482901 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482905 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-07 01:13:55.482909 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-07 01:13:55.482913 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.482917 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:13:55.482920 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-07 01:13:55.482924 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-07 01:13:55.482928 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482932 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-07 01:13:55.482936 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-07 01:13:55.482939 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.482943 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.482947 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-07 01:13:55.482951 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-07 01:13:55.482955 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482959 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-07 01:13:55.482962 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-07 01:13:55.482966 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.482970 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.482974 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-07 01:13:55.482978 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-07 01:13:55.482982 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-07 01:13:55.482988 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-07 01:13:55.482992 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-07 01:13:55.482996 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-07 01:13:55.483000 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.483003 | orchestrator |
2026-01-07 01:13:55.483007 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-07 01:13:55.483011 | orchestrator |
2026-01-07 01:13:55.483015 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-07 01:13:55.483019 | orchestrator | Wednesday 07 January 2026 01:13:52 +0000 (0:00:01.244) 0:09:27.887 *****
2026-01-07 01:13:55.483022 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-07 01:13:55.483026 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-07 01:13:55.483030 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.483034 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-07 01:13:55.483038 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-07 01:13:55.483041 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.483045 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-07 01:13:55.483049 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-07 01:13:55.483053 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.483056 | orchestrator |
2026-01-07 01:13:55.483060 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-07 01:13:55.483064 | orchestrator |
2026-01-07 01:13:55.483068 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-07 01:13:55.483074 | orchestrator | Wednesday 07 January 2026 01:13:53 +0000 (0:00:00.539) 0:09:28.426 *****
2026-01-07 01:13:55.483078 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.483082 | orchestrator |
2026-01-07 01:13:55.483086 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-07 01:13:55.483090 | orchestrator |
2026-01-07 01:13:55.483095 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-07 01:13:55.483099 | orchestrator | Wednesday 07 January 2026 01:13:54 +0000 (0:00:01.107) 0:09:29.534 *****
2026-01-07 01:13:55.483103 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:13:55.483107 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:13:55.483110 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:13:55.483114 | orchestrator |
2026-01-07 01:13:55.483118 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:13:55.483122 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:13:55.483126 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=48  rescued=0 ignored=0
2026-01-07 01:13:55.483130 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0
2026-01-07 01:13:55.483134 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0
2026-01-07 01:13:55.483138 | orchestrator | testbed-node-3 : ok=49  changed=29  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-07 01:13:55.483142 | orchestrator | testbed-node-4 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-07 01:13:55.483146 | orchestrator | testbed-node-5 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-07 01:13:55.483149 | orchestrator |
2026-01-07 01:13:55.483153 | orchestrator |
2026-01-07 01:13:55.483157 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:13:55.483161 | orchestrator | Wednesday 07 January 2026 01:13:54 +0000 (0:00:00.427) 0:09:29.961 *****
2026-01-07 01:13:55.483165 | orchestrator | ===============================================================================
2026-01-07 01:13:55.483168 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.31s
2026-01-07 01:13:55.483172 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.45s
2026-01-07 01:13:55.483176 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 21.94s
2026-01-07 01:13:55.483180 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.83s
2026-01-07 01:13:55.483184 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.36s
2026-01-07 01:13:55.483188 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.87s
2026-01-07 01:13:55.483191 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.95s
2026-01-07 01:13:55.483195 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.28s
2026-01-07 01:13:55.483199 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.73s
2026-01-07 01:13:55.483203 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.60s
2026-01-07 01:13:55.483206 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.32s
2026-01-07 01:13:55.483210 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.88s
2026-01-07 01:13:55.483214 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 13.13s
2026-01-07 01:13:55.483223 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.96s
2026-01-07 01:13:55.483227 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.90s
2026-01-07 01:13:55.483231 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.66s
2026-01-07 01:13:55.483235 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.05s
2026-01-07 01:13:55.483239 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.23s
2026-01-07 01:13:55.483243 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.56s
2026-01-07 01:13:55.483246 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.53s
2026-01-07 01:13:55.483250 | orchestrator | 2026-01-07 01:13:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:13:58.517197 | orchestrator | 2026-01-07 01:13:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:01.552788 | orchestrator | 2026-01-07 01:14:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:04.593105 | orchestrator | 2026-01-07 01:14:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:07.640406 | orchestrator | 2026-01-07 01:14:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:10.680859 | orchestrator | 2026-01-07 01:14:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:13.725424 | orchestrator | 2026-01-07 01:14:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:16.770436 | orchestrator | 2026-01-07 01:14:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:19.812016 | orchestrator | 2026-01-07 01:14:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:22.858482 | orchestrator | 2026-01-07 01:14:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:25.900118 | orchestrator | 2026-01-07 01:14:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:28.937344 | orchestrator | 2026-01-07 01:14:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:31.981335 | orchestrator | 2026-01-07 01:14:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:35.043175 | orchestrator | 2026-01-07 01:14:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:38.083206 | orchestrator | 2026-01-07 01:14:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:41.123270 | orchestrator | 2026-01-07 01:14:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:44.165217 | orchestrator | 2026-01-07 01:14:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:47.209730 | orchestrator | 2026-01-07 01:14:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:50.255559 | orchestrator | 2026-01-07 01:14:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:53.305072 | orchestrator | 2026-01-07 01:14:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-07 01:14:56.343976 | orchestrator |
2026-01-07 01:14:56.657616 | orchestrator |
2026-01-07 01:14:56.663190 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Jan 7 01:14:56 UTC 2026
2026-01-07 01:14:56.663245 | orchestrator |
2026-01-07 01:14:56.995498 | orchestrator | ok: Runtime: 0:35:34.450512
2026-01-07 01:14:57.271852 |
2026-01-07 01:14:57.272046 | TASK [Bootstrap services]
2026-01-07 01:14:58.019701 | orchestrator |
2026-01-07 01:14:58.019809 | orchestrator | # BOOTSTRAP
2026-01-07 01:14:58.019820 | orchestrator |
2026-01-07 01:14:58.019828 | orchestrator | + set -e
2026-01-07 01:14:58.019835 | orchestrator | + echo
2026-01-07 01:14:58.019843 | orchestrator | + echo '# BOOTSTRAP'
2026-01-07 01:14:58.019852 | orchestrator | + echo
2026-01-07 01:14:58.019876 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-01-07 01:14:58.027273 | orchestrator | + set -e
2026-01-07 01:14:58.027336 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-01-07 01:15:02.121988 | orchestrator | 2026-01-07 01:15:02 | INFO  | It takes a moment until task a999e9e6-460d-4234-8b8a-29e2973abb5c (flavor-manager) has been started and output is visible here.
2026-01-07 01:15:07.978847 | orchestrator | 2026-01-07 01:15:04 | INFO  | Flavor SCS-1L-1 created
2026-01-07 01:15:07.978937 | orchestrator | 2026-01-07 01:15:04 | INFO  | Flavor SCS-1L-1-5 created
2026-01-07 01:15:07.978947 | orchestrator | 2026-01-07 01:15:04 | INFO  | Flavor SCS-1V-2 created
2026-01-07 01:15:07.978992 | orchestrator | 2026-01-07 01:15:04 | INFO  | Flavor SCS-1V-2-5 created
2026-01-07 01:15:07.979001 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-1V-4 created
2026-01-07 01:15:07.979008 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-1V-4-10 created
2026-01-07 01:15:07.979012 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-1V-8 created
2026-01-07 01:15:07.979017 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-1V-8-20 created
2026-01-07 01:15:07.979025 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-2V-4 created
2026-01-07 01:15:07.979029 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-2V-4-10 created
2026-01-07 01:15:07.979033 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-2V-8 created
2026-01-07 01:15:07.979037 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-2V-8-20 created
2026-01-07 01:15:07.979041 | orchestrator | 2026-01-07 01:15:05 | INFO  | Flavor SCS-2V-16 created
2026-01-07 01:15:07.979045 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-2V-16-50 created
2026-01-07 01:15:07.979048 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-8 created
2026-01-07 01:15:07.979052 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-8-20 created
2026-01-07 01:15:07.979057 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-16 created
2026-01-07 01:15:07.979064 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-16-50 created
2026-01-07 01:15:07.979071 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-32 created
2026-01-07 01:15:07.979086 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-4V-32-100 created
2026-01-07 01:15:07.979090 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-8V-16 created
2026-01-07 01:15:07.979095 | orchestrator | 2026-01-07 01:15:06 | INFO  | Flavor SCS-8V-16-50 created
2026-01-07 01:15:07.979099 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-8V-32 created
2026-01-07 01:15:07.979103 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-8V-32-100 created
2026-01-07 01:15:07.979107 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-16V-32 created
2026-01-07 01:15:07.979111 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-16V-32-100 created
2026-01-07 01:15:07.979114 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-2V-4-20s created
2026-01-07 01:15:07.979118 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-4V-8-50s created
2026-01-07 01:15:07.979123 | orchestrator | 2026-01-07 01:15:07 | INFO  | Flavor SCS-8V-32-100s created
2026-01-07 01:15:10.315219 | orchestrator | 2026-01-07 01:15:10 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-01-07 01:15:20.533888 | orchestrator | 2026-01-07 01:15:20 | INFO  | Task 0892fd79-4adb-484f-8c5a-21b2f44c26e9 (bootstrap-basic) was prepared for execution.
2026-01-07 01:15:20.533958 | orchestrator | 2026-01-07 01:15:20 | INFO  | It takes a moment until task 0892fd79-4adb-484f-8c5a-21b2f44c26e9 (bootstrap-basic) has been started and output is visible here.
2026-01-07 01:16:02.793046 | orchestrator |
2026-01-07 01:16:02.793113 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-07 01:16:02.793122 | orchestrator |
2026-01-07 01:16:02.793129 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 01:16:02.793135 | orchestrator | Wednesday 07 January 2026 01:15:24 +0000 (0:00:00.061) 0:00:00.061 *****
2026-01-07 01:16:02.793141 | orchestrator | ok: [localhost]
2026-01-07 01:16:02.793147 | orchestrator |
2026-01-07 01:16:02.793152 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-07 01:16:02.793158 | orchestrator | Wednesday 07 January 2026 01:15:26 +0000 (0:00:01.766) 0:00:01.828 *****
2026-01-07 01:16:02.793163 | orchestrator | ok: [localhost]
2026-01-07 01:16:02.793168 | orchestrator |
2026-01-07 01:16:02.793173 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-07 01:16:02.793177 | orchestrator | Wednesday 07 January 2026 01:15:34 +0000 (0:00:07.865) 0:00:09.694 *****
2026-01-07 01:16:02.793180 | orchestrator | changed: [localhost]
2026-01-07 01:16:02.793184 | orchestrator |
2026-01-07 01:16:02.793188 | orchestrator | TASK [Create public network] ***************************************************
2026-01-07 01:16:02.793191 | orchestrator | Wednesday 07 January 2026 01:15:41 +0000 (0:00:07.481) 0:00:17.175 *****
2026-01-07 01:16:02.793195 | orchestrator | changed: [localhost]
2026-01-07 01:16:02.793198 | orchestrator |
2026-01-07 01:16:02.793201 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-07 01:16:02.793204 | orchestrator | Wednesday 07 January 2026 01:15:46 +0000 (0:00:04.684) 0:00:21.859 *****
2026-01-07 01:16:02.793209 | orchestrator | changed: [localhost]
2026-01-07 01:16:02.793213 | orchestrator |
2026-01-07 01:16:02.793216 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-07 01:16:02.793219 | orchestrator | Wednesday 07 January 2026 01:15:52 +0000 (0:00:05.768) 0:00:27.628 *****
2026-01-07 01:16:02.793222 | orchestrator | changed: [localhost]
2026-01-07 01:16:02.793226 | orchestrator |
2026-01-07 01:16:02.793229 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-07 01:16:02.793232 | orchestrator | Wednesday 07 January 2026 01:15:55 +0000 (0:00:03.479) 0:00:31.107 *****
2026-01-07 01:16:02.793235 | orchestrator | changed: [localhost]
2026-01-07 01:16:02.793238 | orchestrator |
2026-01-07 01:16:02.793242 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-07 01:16:02.793250 | orchestrator | Wednesday 07 January 2026 01:15:59 +0000 (0:00:03.435) 0:00:34.543 *****
2026-01-07 01:16:02.793253 | orchestrator | ok: [localhost]
2026-01-07 01:16:02.793256 | orchestrator |
2026-01-07 01:16:02.793260 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:16:02.793263 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:16:02.793267 | orchestrator |
2026-01-07 01:16:02.793270 | orchestrator |
2026-01-07 01:16:02.793274 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:16:02.793277 | orchestrator | Wednesday 07 January 2026 01:16:02 +0000 (0:00:03.403) 0:00:37.946 *****
2026-01-07 01:16:02.793280 | orchestrator | ===============================================================================
2026-01-07 01:16:02.793283 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.87s
2026-01-07 01:16:02.793287 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.48s
2026-01-07 01:16:02.793290 | orchestrator | Set public network to default ------------------------------------------- 5.77s
2026-01-07 01:16:02.793293 | orchestrator | Create public network --------------------------------------------------- 4.68s
2026-01-07 01:16:02.793307 | orchestrator | Create public subnet ---------------------------------------------------- 3.48s
2026-01-07 01:16:02.793311 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.44s
2026-01-07 01:16:02.793314 | orchestrator | Create manager role ----------------------------------------------------- 3.40s
2026-01-07 01:16:02.793317 | orchestrator | Gathering Facts --------------------------------------------------------- 1.77s
2026-01-07 01:16:05.235149 | orchestrator | 2026-01-07 01:16:05 | INFO  | It takes a moment until task 2ec7c2ba-28b6-4002-b293-c1942aae6936 (image-manager) has been started and output is visible here.
2026-01-07 01:16:42.782079 | orchestrator | 2026-01-07 01:16:07 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-07 01:16:42.782132 | orchestrator | 2026-01-07 01:16:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-07 01:16:42.782139 | orchestrator | 2026-01-07 01:16:07 | INFO  | Importing image Cirros 0.6.2
2026-01-07 01:16:42.782144 | orchestrator | 2026-01-07 01:16:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-07 01:16:42.782149 | orchestrator | 2026-01-07 01:16:09 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:16:42.782153 | orchestrator | 2026-01-07 01:16:11 | INFO  | Waiting for import to complete...
2026-01-07 01:16:42.782157 | orchestrator | 2026-01-07 01:16:21 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-07 01:16:42.782162 | orchestrator | 2026-01-07 01:16:21 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-07 01:16:42.782166 | orchestrator | 2026-01-07 01:16:21 | INFO  | Setting internal_version = 0.6.2
2026-01-07 01:16:42.782170 | orchestrator | 2026-01-07 01:16:21 | INFO  | Setting image_original_user = cirros
2026-01-07 01:16:42.782174 | orchestrator | 2026-01-07 01:16:21 | INFO  | Adding tag os:cirros
2026-01-07 01:16:42.782178 | orchestrator | 2026-01-07 01:16:22 | INFO  | Setting property architecture: x86_64
2026-01-07 01:16:42.782184 | orchestrator | 2026-01-07 01:16:22 | INFO  | Setting property hw_disk_bus: scsi
2026-01-07 01:16:42.782191 | orchestrator | 2026-01-07 01:16:22 | INFO  | Setting property hw_rng_model: virtio
2026-01-07 01:16:42.782197 | orchestrator | 2026-01-07 01:16:22 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-07 01:16:42.782203 | orchestrator | 2026-01-07 01:16:23 | INFO  | Setting property hw_watchdog_action: reset
2026-01-07 01:16:42.782209 | orchestrator | 2026-01-07 01:16:23 | INFO  | Setting property hypervisor_type: qemu
2026-01-07 01:16:42.782215 | orchestrator | 2026-01-07 01:16:23 | INFO  | Setting property os_distro: cirros
2026-01-07 01:16:42.782222 | orchestrator | 2026-01-07 01:16:23 | INFO  | Setting property os_purpose: minimal
2026-01-07 01:16:42.782228 | orchestrator | 2026-01-07 01:16:23 | INFO  | Setting property replace_frequency: never
2026-01-07 01:16:42.782234 | orchestrator | 2026-01-07 01:16:24 | INFO  | Setting property uuid_validity: none
2026-01-07 01:16:42.782240 | orchestrator | 2026-01-07 01:16:24 | INFO  | Setting property provided_until: none
2026-01-07 01:16:42.782247 | orchestrator | 2026-01-07 01:16:24 | INFO  | Setting property image_description: Cirros
2026-01-07 01:16:42.782253 | orchestrator | 2026-01-07 01:16:24 | INFO  | Setting property image_name: Cirros
2026-01-07 01:16:42.782260 | orchestrator | 2026-01-07 01:16:24 | INFO  | Setting property internal_version: 0.6.2
2026-01-07 01:16:42.782266 | orchestrator | 2026-01-07 01:16:25 | INFO  | Setting property image_original_user: cirros
2026-01-07 01:16:42.782286 | orchestrator | 2026-01-07 01:16:25 | INFO  | Setting property os_version: 0.6.2
2026-01-07 01:16:42.782300 | orchestrator | 2026-01-07 01:16:25 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-07 01:16:42.782308 | orchestrator | 2026-01-07 01:16:25 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-07 01:16:42.782316 | orchestrator | 2026-01-07 01:16:25 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-07 01:16:42.782322 | orchestrator | 2026-01-07 01:16:25 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-07 01:16:42.782330 | orchestrator | 2026-01-07 01:16:25 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-07 01:16:42.782336 | orchestrator | 2026-01-07 01:16:25 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-07 01:16:42.782344 | orchestrator | 2026-01-07 01:16:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-07 01:16:42.782349 | orchestrator | 2026-01-07 01:16:26 | INFO  | Importing image Cirros 0.6.3
2026-01-07 01:16:42.782353 | orchestrator | 2026-01-07 01:16:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-07 01:16:42.782357 | orchestrator | 2026-01-07 01:16:26 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:16:42.782360 | orchestrator | 2026-01-07 01:16:28 | INFO  | Waiting for import to complete...
2026-01-07 01:16:42.782373 | orchestrator | 2026-01-07 01:16:38 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-07 01:16:42.782377 | orchestrator | 2026-01-07 01:16:38 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-07 01:16:42.782381 | orchestrator | 2026-01-07 01:16:38 | INFO  | Setting internal_version = 0.6.3
2026-01-07 01:16:42.782385 | orchestrator | 2026-01-07 01:16:38 | INFO  | Setting image_original_user = cirros
2026-01-07 01:16:42.782389 | orchestrator | 2026-01-07 01:16:38 | INFO  | Adding tag os:cirros
2026-01-07 01:16:42.782393 | orchestrator | 2026-01-07 01:16:38 | INFO  | Setting property architecture: x86_64
2026-01-07 01:16:42.782396 | orchestrator | 2026-01-07 01:16:39 | INFO  | Setting property hw_disk_bus: scsi
2026-01-07 01:16:42.782400 | orchestrator | 2026-01-07 01:16:39 | INFO  | Setting property hw_rng_model: virtio
2026-01-07 01:16:42.782404 | orchestrator | 2026-01-07 01:16:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-07 01:16:42.782408 | orchestrator | 2026-01-07 01:16:39 | INFO  | Setting property hw_watchdog_action: reset
2026-01-07 01:16:42.782412 | orchestrator | 2026-01-07 01:16:40 | INFO  | Setting property hypervisor_type: qemu
2026-01-07 01:16:42.782416 | orchestrator | 2026-01-07 01:16:40 | INFO  | Setting property os_distro: cirros
2026-01-07 01:16:42.782420 | orchestrator | 2026-01-07 01:16:40 | INFO  | Setting property os_purpose: minimal
2026-01-07 01:16:42.782423 | orchestrator | 2026-01-07 01:16:40 | INFO  | Setting property replace_frequency: never
2026-01-07 01:16:42.782427 | orchestrator | 2026-01-07 01:16:40 | INFO  | Setting property uuid_validity: none
2026-01-07 01:16:42.782431 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property provided_until: none
2026-01-07 01:16:42.782435 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property image_description: Cirros
2026-01-07 01:16:42.782439 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property image_name: Cirros
2026-01-07 01:16:42.782443 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property internal_version: 0.6.3
2026-01-07 01:16:42.782450 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property image_original_user: cirros
2026-01-07 01:16:42.782454 | orchestrator | 2026-01-07 01:16:41 | INFO  | Setting property os_version: 0.6.3
2026-01-07 01:16:42.782458 | orchestrator | 2026-01-07 01:16:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-07 01:16:42.782461 | orchestrator | 2026-01-07 01:16:42 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-07 01:16:42.782465 | orchestrator | 2026-01-07 01:16:42 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-07 01:16:42.782469 | orchestrator | 2026-01-07 01:16:42 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-07 01:16:42.782473 | orchestrator | 2026-01-07 01:16:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-07 01:16:43.126607 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-07 01:16:45.417927 | orchestrator | 2026-01-07 01:16:45 | INFO  | date: 2026-01-06
2026-01-07 01:16:45.417997 | orchestrator | 2026-01-07 01:16:45 | INFO  | image: octavia-amphora-haproxy-2025.1.20260106.qcow2
2026-01-07 01:16:45.418045 | orchestrator | 2026-01-07 01:16:45 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260106.qcow2
2026-01-07 01:16:45.418541 | orchestrator | 2026-01-07 01:16:45 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260106.qcow2.CHECKSUM
2026-01-07 01:16:45.604569 | orchestrator | 2026-01-07 01:16:45 | INFO  | checksum: 8cb5c5b3ed8717034b299395216b74cd71a8fd3f08074645b5ba560ad4b3fb7c
2026-01-07 01:16:45.699034 | orchestrator |
2026-01-07 01:16:45 | INFO  | It takes a moment until task 74efbdc8-a33c-444a-990f-0ae5e1a72556 (image-manager) has been started and output is visible here. 2026-01-07 01:19:43.721232 | orchestrator | 2026-01-07 01:16:47 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-06' 2026-01-07 01:19:43.721305 | orchestrator | 2026-01-07 01:16:47 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260106.qcow2: 200 2026-01-07 01:19:43.721317 | orchestrator | 2026-01-07 01:16:47 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-06 2026-01-07 01:19:43.721342 | orchestrator | 2026-01-07 01:16:47 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260106.qcow2 2026-01-07 01:19:43.721350 | orchestrator | 2026-01-07 01:16:49 | INFO  | Waiting for image to leave queued state... 2026-01-07 01:19:43.721358 | orchestrator | 2026-01-07 01:16:51 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721365 | orchestrator | 2026-01-07 01:17:01 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721372 | orchestrator | 2026-01-07 01:17:12 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721378 | orchestrator | 2026-01-07 01:17:22 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721386 | orchestrator | 2026-01-07 01:17:32 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721394 | orchestrator | 2026-01-07 01:17:42 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721400 | orchestrator | 2026-01-07 01:17:52 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721407 | orchestrator | 2026-01-07 01:18:02 | INFO  | Waiting for import to complete... 2026-01-07 01:19:43.721413 | orchestrator | 2026-01-07 01:18:12 | INFO  | Waiting for image to leave queued state... 
2026-01-07 01:19:43.721432 | orchestrator | 2026-01-07 01:18:14 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721437 | orchestrator | 2026-01-07 01:18:16 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721441 | orchestrator | 2026-01-07 01:18:18 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721445 | orchestrator | 2026-01-07 01:18:20 | ERROR  | Image OpenStack Octavia Amphora 2026-01-06 seems stuck in queued state
2026-01-07 01:19:43.721450 | orchestrator | 2026-01-07 01:18:20 | WARNING  | Deleting stuck image OpenStack Octavia Amphora 2026-01-06 and retrying import
2026-01-07 01:19:43.721454 | orchestrator | 2026-01-07 01:18:21 | INFO  | Retry attempt 1/1 for image OpenStack Octavia Amphora 2026-01-06
2026-01-07 01:19:43.721458 | orchestrator | 2026-01-07 01:18:22 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721462 | orchestrator | 2026-01-07 01:18:24 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721465 | orchestrator | 2026-01-07 01:18:34 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721469 | orchestrator | 2026-01-07 01:18:44 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721473 | orchestrator | 2026-01-07 01:18:54 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721477 | orchestrator | 2026-01-07 01:19:05 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721481 | orchestrator | 2026-01-07 01:19:15 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721484 | orchestrator | 2026-01-07 01:19:25 | INFO  | Waiting for import to complete...
2026-01-07 01:19:43.721495 | orchestrator | 2026-01-07 01:19:35 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721499 | orchestrator | 2026-01-07 01:19:37 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721503 | orchestrator | 2026-01-07 01:19:39 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721507 | orchestrator | 2026-01-07 01:19:41 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:19:43.721511 | orchestrator | 2026-01-07 01:19:43 | ERROR  | Image OpenStack Octavia Amphora 2026-01-06 seems stuck in queued state
2026-01-07 01:19:43.721515 | orchestrator | 2026-01-07 01:19:43 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-07 01:19:43.721519 | orchestrator | 2026-01-07 01:19:43 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-07 01:19:43.721522 | orchestrator | 2026-01-07 01:19:43 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-07 01:19:43.721526 | orchestrator | 2026-01-07 01:19:43 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-07 01:19:43.721530 | orchestrator |
2026-01-07 01:19:43.721544 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-01-07 01:19:44.515282 | orchestrator | ERROR
2026-01-07 01:19:44.515793 | orchestrator | {
2026-01-07 01:19:44.515916 | orchestrator | "delta": "0:04:46.398240",
2026-01-07 01:19:44.515995 | orchestrator | "end": "2026-01-07 01:19:44.028779",
2026-01-07 01:19:44.516061 | orchestrator | "msg": "non-zero return code",
2026-01-07 01:19:44.516118 | orchestrator | "rc": 1,
2026-01-07 01:19:44.516173 | orchestrator | "start": "2026-01-07 01:14:57.630539"
2026-01-07 01:19:44.516226 | orchestrator | } failure
2026-01-07 01:19:44.540282 |
2026-01-07 01:19:44.540529 | PLAY RECAP
2026-01-07 01:19:44.540825 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-01-07 01:19:44.540875 |
2026-01-07 01:19:44.820923 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-07 01:19:44.822097 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:19:45.599282 |
2026-01-07 01:19:45.599494 | PLAY [Post output play]
2026-01-07 01:19:45.616553 |
2026-01-07 01:19:45.616719 | LOOP [stage-output : Register sources]
2026-01-07 01:19:45.689581 |
2026-01-07 01:19:45.689925 | TASK [stage-output : Check sudo]
2026-01-07 01:19:46.592261 | orchestrator | sudo: a password is required
2026-01-07 01:19:46.739862 | orchestrator | ok: Runtime: 0:00:00.015127
2026-01-07 01:19:46.754908 |
2026-01-07 01:19:46.755071 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-07 01:19:46.794648 |
2026-01-07 01:19:46.794937 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-07 01:19:46.874808 | orchestrator | ok
2026-01-07 01:19:46.884463 |
2026-01-07 01:19:46.884642 | LOOP [stage-output : Ensure target folders exist]
2026-01-07 01:19:47.410945 | orchestrator | ok: "docs"
2026-01-07 01:19:47.411255 |
2026-01-07 01:19:47.714047 | orchestrator | ok: "artifacts"
2026-01-07 01:19:48.031064 | orchestrator | ok: "logs"
2026-01-07 01:19:48.052663 |
2026-01-07 01:19:48.052860 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-07 01:19:48.101768 |
2026-01-07 01:19:48.102057 | TASK [stage-output : Make all log files readable]
2026-01-07 01:19:48.468106 | orchestrator | ok
2026-01-07 01:19:48.481324 |
2026-01-07 01:19:48.481571 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-07 01:19:48.517649 | orchestrator | skipping: Conditional result was False
2026-01-07 01:19:48.532934 |
2026-01-07 01:19:48.533109 | TASK [stage-output : Discover log files for compression]
2026-01-07 01:19:48.560792 | orchestrator | skipping: Conditional result was False
2026-01-07 01:19:48.577000 |
2026-01-07 01:19:48.577180 | LOOP [stage-output : Archive everything from logs]
2026-01-07 01:19:48.622129 |
2026-01-07 01:19:48.622496 | PLAY [Post cleanup play]
2026-01-07 01:19:48.631770 |
2026-01-07 01:19:48.631913 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:19:48.690864 | orchestrator | ok
2026-01-07 01:19:48.704519 |
2026-01-07 01:19:48.704691 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:19:48.749787 | orchestrator | skipping: Conditional result was False
2026-01-07 01:19:48.765717 |
2026-01-07 01:19:48.765989 | TASK [Clean the cloud environment]
2026-01-07 01:19:49.399953 | orchestrator | 2026-01-07 01:19:49 - clean up servers
2026-01-07 01:19:50.160649 | orchestrator | 2026-01-07 01:19:50 - testbed-manager
2026-01-07 01:19:50.244377 | orchestrator | 2026-01-07 01:19:50 - testbed-node-1
2026-01-07 01:19:50.330343 | orchestrator | 2026-01-07 01:19:50 - testbed-node-0
2026-01-07 01:19:50.426638 | orchestrator | 2026-01-07 01:19:50 - testbed-node-4
2026-01-07 01:19:50.512533 | orchestrator | 2026-01-07 01:19:50 - testbed-node-5
2026-01-07 01:19:50.607652 | orchestrator | 2026-01-07 01:19:50 - testbed-node-3
2026-01-07 01:19:50.699466 | orchestrator | 2026-01-07 01:19:50 - testbed-node-2
2026-01-07 01:19:50.786384 | orchestrator | 2026-01-07 01:19:50 - clean up keypairs
2026-01-07 01:19:50.806645 | orchestrator | 2026-01-07 01:19:50 - testbed
2026-01-07 01:19:50.837655 | orchestrator | 2026-01-07 01:19:50 - wait for servers to be gone
2026-01-07 01:20:04.320620 | orchestrator | 2026-01-07 01:20:04 - clean up ports
2026-01-07 01:20:04.510002 | orchestrator | 2026-01-07 01:20:04 - 009b04bb-d39b-4448-9838-7e7c8928da83
2026-01-07 01:20:04.783582 | orchestrator | 2026-01-07 01:20:04 - 22849df8-5f2a-4248-a765-2b994e42e50e
2026-01-07 01:20:05.061313 | orchestrator | 2026-01-07 01:20:05 - 40a94084-f2ba-45ed-bd88-3af3671e2962
2026-01-07 01:20:05.274302 | orchestrator | 2026-01-07 01:20:05 - 42271af3-83fe-48df-936f-8587e120d246
2026-01-07 01:20:05.495733 | orchestrator | 2026-01-07 01:20:05 - 548944d1-5b6f-4301-93dc-79a73a53c188
2026-01-07 01:20:05.742249 | orchestrator | 2026-01-07 01:20:05 - 8ee6cc05-2124-4932-ab7d-6c7c4c96cac5
2026-01-07 01:20:05.965881 | orchestrator | 2026-01-07 01:20:05 - e9cdee58-56ee-4cfd-902c-e9f38170a462
2026-01-07 01:20:06.351017 | orchestrator | 2026-01-07 01:20:06 - clean up volumes
2026-01-07 01:20:06.475990 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-0-node-base
2026-01-07 01:20:06.515753 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-2-node-base
2026-01-07 01:20:06.552016 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-1-node-base
2026-01-07 01:20:06.591925 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-3-node-base
2026-01-07 01:20:06.630956 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-5-node-base
2026-01-07 01:20:06.671640 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-4-node-base
2026-01-07 01:20:06.709897 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-5-node-5
2026-01-07 01:20:06.747290 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-8-node-5
2026-01-07 01:20:06.794300 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-manager-base
2026-01-07 01:20:06.832647 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-3-node-3
2026-01-07 01:20:06.872233 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-0-node-3
2026-01-07 01:20:06.915772 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-6-node-3
2026-01-07 01:20:06.957826 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-4-node-4
2026-01-07 01:20:06.999858 | orchestrator | 2026-01-07 01:20:06 - testbed-volume-7-node-4
2026-01-07 01:20:07.040878 | orchestrator | 2026-01-07 01:20:07 - testbed-volume-2-node-5
2026-01-07 01:20:07.080104 | orchestrator | 2026-01-07 01:20:07 - testbed-volume-1-node-4
2026-01-07 01:20:07.125913 | orchestrator | 2026-01-07 01:20:07 - disconnect routers
2026-01-07 01:20:07.231714 | orchestrator | 2026-01-07 01:20:07 - testbed
2026-01-07 01:20:08.283545 | orchestrator | 2026-01-07 01:20:08 - clean up subnets
2026-01-07 01:20:08.328010 | orchestrator | 2026-01-07 01:20:08 - subnet-testbed-management
2026-01-07 01:20:08.501495 | orchestrator | 2026-01-07 01:20:08 - clean up networks
2026-01-07 01:20:08.677011 | orchestrator | 2026-01-07 01:20:08 - net-testbed-management
2026-01-07 01:20:08.941908 | orchestrator | 2026-01-07 01:20:08 - clean up security groups
2026-01-07 01:20:08.984482 | orchestrator | 2026-01-07 01:20:08 - testbed-node
2026-01-07 01:20:09.100816 | orchestrator | 2026-01-07 01:20:09 - testbed-management
2026-01-07 01:20:09.203063 | orchestrator | 2026-01-07 01:20:09 - clean up floating ips
2026-01-07 01:20:09.235197 | orchestrator | 2026-01-07 01:20:09 - 81.163.192.221
2026-01-07 01:20:09.620067 | orchestrator | 2026-01-07 01:20:09 - clean up routers
2026-01-07 01:20:09.731670 | orchestrator | 2026-01-07 01:20:09 - testbed
2026-01-07 01:20:11.321464 | orchestrator | ok: Runtime: 0:00:21.992810
2026-01-07 01:20:11.326308 |
2026-01-07 01:20:11.326501 | PLAY RECAP
2026-01-07 01:20:11.326618 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-07 01:20:11.326669 |
2026-01-07 01:20:11.496654 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:20:11.497762 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:20:12.290739 |
2026-01-07 01:20:12.290949 | PLAY [Cleanup play]
2026-01-07 01:20:12.309421 |
2026-01-07 01:20:12.309580 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:20:12.367846 | orchestrator | ok
2026-01-07 01:20:12.376964 |
2026-01-07 01:20:12.377127 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:20:12.412020 | orchestrator | skipping: Conditional result was False
2026-01-07 01:20:12.430331 |
2026-01-07 01:20:12.430581 | TASK [Clean the cloud environment]
2026-01-07 01:20:13.698611 | orchestrator | 2026-01-07 01:20:13 - clean up servers
2026-01-07 01:20:14.248062 | orchestrator | 2026-01-07 01:20:14 - clean up keypairs
2026-01-07 01:20:14.263142 | orchestrator | 2026-01-07 01:20:14 - wait for servers to be gone
2026-01-07 01:20:14.305895 | orchestrator | 2026-01-07 01:20:14 - clean up ports
2026-01-07 01:20:14.381495 | orchestrator | 2026-01-07 01:20:14 - clean up volumes
2026-01-07 01:20:14.454302 | orchestrator | 2026-01-07 01:20:14 - disconnect routers
2026-01-07 01:20:14.484516 | orchestrator | 2026-01-07 01:20:14 - clean up subnets
2026-01-07 01:20:14.503695 | orchestrator | 2026-01-07 01:20:14 - clean up networks
2026-01-07 01:20:14.690765 | orchestrator | 2026-01-07 01:20:14 - clean up security groups
2026-01-07 01:20:14.729043 | orchestrator | 2026-01-07 01:20:14 - clean up floating ips
2026-01-07 01:20:14.753126 | orchestrator | 2026-01-07 01:20:14 - clean up routers
2026-01-07 01:20:14.970128 | orchestrator | ok: Runtime: 0:00:01.538087
2026-01-07 01:20:14.974152 |
2026-01-07 01:20:14.974369 | PLAY RECAP
2026-01-07 01:20:14.974568 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-07 01:20:14.974671 |
2026-01-07 01:20:15.118624 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:20:15.121241 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:20:15.921097 |
2026-01-07 01:20:15.921277 | PLAY [Base post-fetch]
2026-01-07 01:20:15.937858 |
2026-01-07 01:20:15.938027 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-07 01:20:16.013469 | orchestrator | skipping: Conditional result was False
2026-01-07 01:20:16.030373 |
2026-01-07 01:20:16.030674 | TASK [fetch-output : Set log path for single node]
2026-01-07 01:20:16.080757 | orchestrator | ok
2026-01-07 01:20:16.091616 |
2026-01-07 01:20:16.091817 | LOOP [fetch-output : Ensure local output dirs]
2026-01-07 01:20:16.642188 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/logs"
2026-01-07 01:20:16.953751 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/artifacts"
2026-01-07 01:20:17.258090 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f153efee2c894661b1982c7c4bcd0469/work/docs"
2026-01-07 01:20:17.280766 |
2026-01-07 01:20:17.280953 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-07 01:20:18.219263 | orchestrator | changed: .d..t...... ./
2026-01-07 01:20:18.219655 | orchestrator | changed: All items complete
2026-01-07 01:20:18.219713 |
2026-01-07 01:20:18.935542 | orchestrator | changed: .d..t...... ./
2026-01-07 01:20:19.705701 | orchestrator | changed: .d..t...... ./
2026-01-07 01:20:19.723694 |
2026-01-07 01:20:19.723892 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-07 01:20:19.763530 | orchestrator | skipping: Conditional result was False
2026-01-07 01:20:19.766331 | orchestrator | skipping: Conditional result was False
2026-01-07 01:20:19.793529 |
2026-01-07 01:20:19.793696 | PLAY RECAP
2026-01-07 01:20:19.793788 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-07 01:20:19.793830 |
2026-01-07 01:20:19.957299 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:20:19.958575 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:20:20.723828 |
2026-01-07 01:20:20.723997 | PLAY [Base post]
2026-01-07 01:20:20.739690 |
2026-01-07 01:20:20.739844 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-07 01:20:21.831578 | orchestrator | changed
2026-01-07 01:20:21.840222 |
2026-01-07 01:20:21.840342 | PLAY RECAP
2026-01-07 01:20:21.840439 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-07 01:20:21.840506 |
2026-01-07 01:20:21.970047 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:20:21.971154 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-07 01:20:22.803206 |
2026-01-07 01:20:22.803415 | PLAY [Base post-logs]
2026-01-07 01:20:22.814748 |
2026-01-07 01:20:22.814934 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-07 01:20:23.297799 | localhost | changed
2026-01-07 01:20:23.321236 |
2026-01-07 01:20:23.321482 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-07 01:20:23.350995 | localhost | ok
2026-01-07 01:20:23.359017 |
2026-01-07 01:20:23.359257 | TASK [Set zuul-log-path fact]
2026-01-07 01:20:23.390322 | localhost | ok
2026-01-07 01:20:23.406970 |
2026-01-07 01:20:23.407167 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 01:20:23.446108 | localhost | ok
2026-01-07 01:20:23.452673 |
2026-01-07 01:20:23.452863 | TASK [upload-logs : Create log directories]
2026-01-07 01:20:24.007557 | localhost | changed
2026-01-07 01:20:24.014501 |
2026-01-07 01:20:24.014728 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-07 01:20:24.622280 | localhost -> localhost | ok: Runtime: 0:00:00.008372
2026-01-07 01:20:24.631732 |
2026-01-07 01:20:24.631925 | TASK [upload-logs : Upload logs to log server]
2026-01-07 01:20:25.251948 | localhost | Output suppressed because no_log was given
2026-01-07 01:20:25.256242 |
2026-01-07 01:20:25.256465 | LOOP [upload-logs : Compress console log and json output]
2026-01-07 01:20:25.329063 | localhost | skipping: Conditional result was False
2026-01-07 01:20:25.332138 | localhost | skipping: Conditional result was False
2026-01-07 01:20:25.349924 |
2026-01-07 01:20:25.350211 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-07 01:20:25.413285 | localhost | skipping: Conditional result was False
2026-01-07 01:20:25.413715 |
2026-01-07 01:20:25.419653 | localhost | skipping: Conditional result was False
2026-01-07 01:20:25.433821 |
2026-01-07 01:20:25.434035 | LOOP [upload-logs : Upload console log and json output]